Bits and pieces.

KVM Memory Latency

I’ve recently been doing a lot of performance tuning and measurement for flowShield v3. Interestingly, I found that KVM adds serious latency to memory operations. This is problematic for both user- and kernelspace networking applications, as it means increased latency and, at some point, packet loss.

Let’s imagine you have the following piece of C code to get the current timestamp as an unsigned 64-bit integer in microseconds:

#include <stdint.h>
#include <sys/time.h>

/* Get usec timestamp */
uint64_t GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv, NULL); /* fill tv with the current wall-clock time */
    return tv.tv_sec*(uint64_t)1000000+tv.tv_usec;
}

Now, your code deals with hashtables, using a hashbucket size of 8000 entries (which is enough for this kind of usage):

int main() {
        uint64_t call_start=GetTimeStamp();
        uint64_t runtime;
        insert_hasht(7656, 48740293, 27015, 123, 30, 123);
        if(hasht_entity != NULL) {
                printf("IP: %u\n",hasht_entity->ip);
        }
        runtime=GetTimeStamp()-call_start;
        printf("runtime (microseconds): %lu\n",(unsigned long)runtime);
        return 0;
}

Now comes the fun part:

jhofmann@naya:/home/jhofmann/git/tests# gcc test.c -o ht_ns_test
jhofmann@naya:/home/jhofmann/git/tests# ./ht_ns_test
IP: 48740292
runtime (microseconds): 62

Executing exactly the same code within a KVM guest shows a drastic performance drop:

jhofmann@kvm:/home/jhofmann/git/tests# ./ht_ns_test
IP: 48740292
runtime (microseconds): 88

That’s a difference of about 26 microseconds, on almost identical machines (both Debian Buster, running the same kernel).

Personally, I’m no longer a fan of virtualization for high-performance packet processing.
