X-Git-Url: http://git.cascardo.eti.br/?a=blobdiff_plain;f=15net%2Fnet;h=491e84b0f313e831a135d9a234dea4b4694cf139;hb=f4619770cba88e3f6451fba46016f91ef666f4a9;hp=4e0b7e7702e2a3f7c1a7e02f4fcb27bcac30dfcc;hpb=8a090421baae2f8d105e6338c8f8f4b11e8a2346;p=cascardo%2Fkernel%2Fslides%2F.git
diff --git a/15net/net b/15net/net
index 4e0b7e7..491e84b 100644
--- a/15net/net
+++ b/15net/net
@@ -91,8 +91,53 @@
* Current code already updates last time packet was transmitted
* Driver should set watchdog\_timeo and ndo\_tx\_timeout
+# Reception
+
+* Usually happens in an interrupt handler
+* Driver allocates the skb: some drivers allocate buffers at setup time and
+ arrange for the device to write into them directly
+* Must set the skb protocol field: for Ethernet drivers this is easily done
+ with eth\_type\_trans
+* Finally, call netif\_rx
+
+# NAPI
+
+* For better performance, NAPI introduces polling, avoiding too many
+ interrupts when load is high
+* The driver disables interrupts and enables polling in its interrupt handler
+ when RX happens
+* The network subsystem uses a softirq to do the polling
+* The driver's poll function disables polling and re-enables interrupts when
+ it is done with its hardware queue
+
 # NAPI
+* struct napi\_struct
+* netif\_napi\_add(dev, napi, poll\_func, weight)
+* napi\_enable: called in open
+* napi\_disable: called in stop - awaits completion
+* napi\_schedule
+ - napi\_schedule\_prep
+ - \_\_napi\_schedule
+* napi\_complete: called in poll when all is done
+* Use netif\_receive\_skb instead of netif\_rx
+
+# NAPI step by step
+
+* In the interrupt handler:
+ - Check that the received interrupt is an RX interrupt
+ - Call napi\_schedule\_prep to check that NAPI isn't already scheduled
+ - Disable RX interrupts
+ - Call \_\_napi\_schedule
+
+# Weight and Budget
+
+* The weight is the starting budget for the interface, usually 64
+* The poll function must not dequeue more frames than the budget
+* It must call napi\_complete if and only if it has exhausted the hardware
+ queues with fewer frames than the budget
+* It must return the number of frames it processed
+
 # Changes in net device

* Use netdev\_priv, no priv anymore
@@ -104,6 +149,10 @@
# Other recent changes

* Some members moved to netdev\_queue to increase cache-line usage
-* GRO/GSO
-* Multi-queue support
-* RPS - Packet Steering
+* GRO/GSO - Generic Receive/Segmentation Offload, reducing per-packet overhead
+* Multi-queue support, for devices with multiple hardware queues, so each
+ queue can be handled by a different CPU
+* RPS - Receive Packet Steering, which distributes protocol processing among
+ multiple CPUs on a single-device, single-queue system
+* RFS - Receive Flow Steering, which tries to handle the packet on the CPU
+ where the application is running