网卡驱动收包代码分析之 page reuse (NIC driver Rx code analysis: page reuse)
Published 2024-05-10, https://mushiming.com/9156.html


I have recently been studying the Rx page reuse part of Intel's igb kernel driver, and this post is a summary now that I have finished. Some of it may be inaccurate; corrections are welcome.

Page reuse means allocating (num_rx_desc - 1) pages at initialization and reusing them afterwards, so the driver no longer has to allocate an skb for every packet the way it used to; that is the optimization. Of course, when reuse fails, a fresh allocation is still needed. The code for this lives in igb_poll => igb_clean_rx_irq => igb_fetch_rx_buffer.

The driver's tx and rx each have a ring of their own, with next_to_use and next_to_clean cursors; for that background you can refer to the article 学习一下网卡驱动收发包过程 ("a look at the NIC driver's send/receive path"), which my later diagrams also draw on. For page reuse, igb introduces one more cursor, next_to_alloc. These three variables are the core of the mechanism and deserve attention. If you just want a quick picture of page reuse, jump straight from the table of contents to "4. Flipping the page" and "5. Flow diagrams".

Contents

• 1. struct igb_rx_buffer
• 2. igb_clean_rx_irq
• 3. igb_fetch_rx_buffer
• 4. Flipping the page
• 5. Flow diagrams
    • 1. Allocating buffers for the descriptors
    • 2. page reuse ok
        • 2.1 One packet, one desc
        • 2.2 One packet, multiple descs
    • 3. page reuse fail
        • 3.1 One packet, one desc
        • 3.2 One packet, multiple descs
• 6. Closing remarks

1. struct igb_rx_buffer

First, a look at igb_rx_buffer. The structure is simple: mainly dma, page and page_offset. The key field is page_offset, which locates the buffer currently in use within the page; it comes up again later. Note the CONFIG_IGB_DISABLE_PACKET_SPLIT guard: it is not defined here, and it shows up again below.

```c
struct igb_rx_buffer {
	dma_addr_t dma;
#ifdef CONFIG_IGB_DISABLE_PACKET_SPLIT
	struct sk_buff *skb;
#else
	struct page *page;
	u32 page_offset;
#endif
};
```

2. igb_clean_rx_irq

This function is the rx handler, i.e. the receive path. I will only touch on it briefly, since it is not the focus of this post.

```c
/* igb_clean_rx_irq -- * packet split */
static bool igb_clean_rx_irq(struct igb_q_vector *q_vector, int budget)
{
	struct igb_ring *rx_ring = q_vector->rx.ring;
	struct sk_buff *skb = rx_ring->skb;
	unsigned int total_bytes = 0, total_packets = 0;
	//igb_desc_unused returns ((ntc > ntu) ? 0 : ring->count) + ntc - ntu - 1;
	//ring->count is the number of descriptors; cleaned_count is the number
	//of descriptors not held by hw that need cleaning.
	//next_to_use: a descriptor about to be used, not yet filled with a packet.
	//next_to_clean: a descriptor that has been filled with a packet whose data
	//has already been handed to the stack, so it needs processing.
	u16 cleaned_count = igb_desc_unused(rx_ring);

	do {
		union e1000_adv_rx_desc *rx_desc;

		/* return some buffers to hardware, one at a time is too slow */
		//if the number of descriptors to clean crosses the threshold,
		//clean some now and return buffers to hw
		if (cleaned_count >= IGB_RX_BUFFER_WRITE) {
			igb_alloc_rx_buffers(rx_ring, cleaned_count);
			cleaned_count = 0;
		}

		//this uses next_to_clean
		rx_desc = IGB_RX_DESC(rx_ring, rx_ring->next_to_clean);

		if (!igb_test_staterr(rx_desc, E1000_RXD_STAT_DD))
			break;

		/*
		 * This memory barrier is needed to keep us from reading
		 * any other fields out of the rx_desc until we know the
		 * RXD_STAT_DD bit is set
		 */
		rmb();

		/* retrieve a buffer from the ring */
		//hand the descriptor at next_to_clean to igb_fetch_rx_buffer
		//so its buffer can be reused, i.e. retrieved
		skb = igb_fetch_rx_buffer(rx_ring, rx_desc, skb);

		/* exit if we failed to retrieve a buffer */
		if (!skb)
			break;
		//every consumed descriptor bumps cleaned_count
		cleaned_count++;

		/* fetch next buffer in frame if non-eop */
		//this updates next_to_clean and checks for end of packet:
		//returns false for EOP, true otherwise
		if (igb_is_non_eop(rx_ring, rx_desc))
			continue;

		/* verify the packet layout is correct */
		if (igb_cleanup_headers(rx_ring, rx_desc, skb)) {
			skb = NULL;
			continue;
		}

		/* probably a little skewed due to removing CRC */
		total_bytes += skb->len;

		/* populate checksum, timestamp, VLAN, and protocol */
		igb_process_skb_fields(rx_ring, rx_desc, skb);

#ifndef IGB_NO_LRO
		if (igb_can_lro(rx_ring, rx_desc, skb))
			igb_lro_receive(q_vector, skb);
		else
#endif
#ifdef HAVE_VLAN_RX_REGISTER
			igb_receive_skb(q_vector, skb);
#else
			napi_gro_receive(&q_vector->napi, skb);
#endif
#ifndef NETIF_F_GRO

		netdev_ring(rx_ring)->last_rx = jiffies;
#endif

		/* reset skb pointer */
		skb = NULL;

		/* update budget accounting */
		total_packets++;
	} while (likely(total_packets < budget));

	/* place incomplete frames back on ring for completion */
	rx_ring->skb = skb;

	rx_ring->rx_stats.packets += total_packets;
	rx_ring->rx_stats.bytes += total_bytes;
	q_vector->rx.total_packets += total_packets;
	q_vector->rx.total_bytes += total_bytes;
	//after the loop ends, allocate buffers for the descriptors
	if (cleaned_count)
		igb_alloc_rx_buffers(rx_ring, cleaned_count);

#ifndef IGB_NO_LRO
	igb_lro_flush_all(q_vector);

#endif /* IGB_NO_LRO */
	return (total_packets < budget);
}
#endif /* CONFIG_IGB_DISABLE_PACKET_SPLIT */
```
```c
/**
 * igb_is_non_eop - process handling of non-EOP buffers
 * @rx_ring: Rx ring being processed
 * @rx_desc: Rx descriptor for current buffer
 *
 * This function updates next to clean. If the buffer is an EOP (end of
 * packet) buffer this function exits returning false, otherwise it will
 * place the sk_buff in the next buffer to be chained and return true
 * indicating that this is in fact a non-EOP buffer.
 **/
static bool igb_is_non_eop(struct igb_ring *rx_ring,
			   union e1000_adv_rx_desc *rx_desc)
{
	u32 ntc = rx_ring->next_to_clean + 1;

	/* fetch, update, and store next to clean */
	ntc = (ntc < rx_ring->count) ? ntc : 0;
	rx_ring->next_to_clean = ntc;

	prefetch(IGB_RX_DESC(rx_ring, ntc));

	if (likely(igb_test_staterr(rx_desc, E1000_RXD_STAT_EOP)))
		return false;

	return true;
}
```
```c
/**
 * igb_alloc_rx_buffers - Replace used receive buffers; packet split
 * @rx_ring: rx descriptor ring
 * @cleaned_count: number of buffers to clean
 **/
void igb_alloc_rx_buffers(struct igb_ring *rx_ring, u16 cleaned_count)
{
	union e1000_adv_rx_desc *rx_desc;
	struct igb_rx_buffer *bi;
	u16 i = rx_ring->next_to_use;//allocation starts from next_to_use

	/* nothing to do */
	if (!cleaned_count)
		return;

	rx_desc = IGB_RX_DESC(rx_ring, i);
	bi = &rx_ring->rx_buffer_info[i];
	i -= rx_ring->count;//first subtract num_rx_desc

	do {
#ifdef CONFIG_IGB_DISABLE_PACKET_SPLIT
		if (!igb_alloc_mapped_skb(rx_ring, bi))
#else
		//allocate a page
		if (!igb_alloc_mapped_page(rx_ring, bi))
#endif /* CONFIG_IGB_DISABLE_PACKET_SPLIT */
			break;

		/*
		 * Refresh the desc even if buffer_addrs didn't change
		 * because each write-back erases this info.
		 */
#ifdef CONFIG_IGB_DISABLE_PACKET_SPLIT
		rx_desc->read.pkt_addr = cpu_to_le64(bi->dma);
#else
		rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
#endif

		rx_desc++;
		bi++;
		i++;
		//wrap back to the start of the ring, so that the slot index
		//stays within 0 ~ count-1
		if (unlikely(!i)) {
			rx_desc = IGB_RX_DESC(rx_ring, 0);
			bi = rx_ring->rx_buffer_info;
			i -= rx_ring->count;
		}

		/* clear the hdr_addr for the next_to_use descriptor */
		rx_desc->read.hdr_addr = 0;

		cleaned_count--;
	} while (cleaned_count);

	i += rx_ring->count;//add num_rx_desc back

	if (rx_ring->next_to_use != i) {
		/* record the next descriptor to use */
		rx_ring->next_to_use = i;//update next_to_use

//note this is ifndef -- no wonder it confused me for quite a while
#ifndef CONFIG_IGB_DISABLE_PACKET_SPLIT
		/* update next to alloc since we have filled the ring */
		//whenever alloc rx buffer runs, next_to_use = next_to_alloc
		rx_ring->next_to_alloc = i;

#endif
		/*
		 * Force memory writes to complete before letting h/w
		 * know there are new descriptors to fetch. (Only
		 * applicable for weak-ordered memory model archs,
		 * such as IA-64).
		 */
		wmb();
		writel(i, rx_ring->tail);
	}
}
```
```c
static bool igb_alloc_mapped_page(struct igb_ring *rx_ring,
				  struct igb_rx_buffer *bi)
{
	struct page *page = bi->page;
	dma_addr_t dma;

	/* since we are recycling buffers we should seldom need to alloc */
	//if reuse succeeded, bail out right away
	if (likely(page))
		return true;

	/* alloc new page for storage */
	page = alloc_page(GFP_ATOMIC | __GFP_COLD);
	if (unlikely(!page)) {
		rx_ring->rx_stats.alloc_failed++;
		return false;
	}

	/* map page for use */
	dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);

	/*
	 * if mapping failed free memory back to system since
	 * there isn't much point in holding memory we can't use
	 */
	if (dma_mapping_error(rx_ring->dev, dma)) {
		__free_page(page);

		rx_ring->rx_stats.alloc_failed++;
		return false;
	}

	bi->dma = dma;
	bi->page = page;
	//the initial offset is 0
	bi->page_offset = 0;

	return true;
}
```

3. igb_fetch_rx_buffer

Now on to igb_fetch_rx_buffer. Intel's comments in this code are actually quite thorough.

```c
static struct sk_buff *igb_fetch_rx_buffer(struct igb_ring *rx_ring,
					   union e1000_adv_rx_desc *rx_desc,
					   struct sk_buff *skb)
{
	struct igb_rx_buffer *rx_buffer;
	struct page *page;

	//since rx_buffer is being recycled, it points at next_to_clean
	rx_buffer = &rx_ring->rx_buffer_info[rx_ring->next_to_clean];

	page = rx_buffer->page;
	prefetchw(page);//__builtin_prefetch() is a gcc builtin: prefetching data by hand reduces read latency and improves performance, but it needs CPU support

	//if the ring's skb is empty, allocate a new one for it
	if (likely(!skb)) {
		void *page_addr = page_address(page) +
				  rx_buffer->page_offset;

		/* prefetch first cache line of first page */
		prefetch(page_addr);
#if L1_CACHE_BYTES < 128
		prefetch(page_addr + L1_CACHE_BYTES);
#endif

		/* allocate a skb to store the frags */
		skb = netdev_alloc_skb_ip_align(rx_ring->netdev,
						IGB_RX_HDR_LEN);
		if (unlikely(!skb)) {
			rx_ring->rx_stats.alloc_failed++;
			return NULL;
		}

		/*
		 * we will be copying header into skb->data in
		 * pskb_may_pull so it is in our interest to prefetch
		 * it now to avoid a possible cache miss
		 */
		prefetchw(skb->data);
	}

	/* we are reusing so sync this buffer for CPU use */
	dma_sync_single_range_for_cpu(rx_ring->dev,
				      rx_buffer->dma,
				      rx_buffer->page_offset,
				      IGB_RX_BUFSZ,
				      DMA_FROM_DEVICE);

	/* pull page into skb */
	if (igb_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
		/* hand second half of page back to the ring */
		igb_reuse_rx_page(rx_ring, rx_buffer);
	} else {
		/* we are not reusing the buffer so unmap it */
		dma_unmap_page(rx_ring->dev, rx_buffer->dma,
			       PAGE_SIZE, DMA_FROM_DEVICE);
	}

	/* clear contents of rx_buffer */
	//the data has been copied into the skb, so clear the page here
	rx_buffer->page = NULL;

	return skb;
}
```

Next, look at igb_add_rx_frag.

```c
/**
 * igb_add_rx_frag - Add contents of Rx buffer to sk_buff
 * @rx_ring: rx descriptor ring to transact packets on
 * @rx_buffer: buffer containing page to add
 * @rx_desc: descriptor containing length of buffer written by hardware
 * @skb: sk_buff to place the data into
 *
 * This function will add the data contained in rx_buffer->page to the skb.
 * This is done either through a direct copy if the data in the buffer is
 * less than the skb header size, otherwise it will just attach the page as
 * a frag to the skb.
 *
 * The function will then update the page offset if necessary and return
 * true if the buffer can be reused by the adapter.
 **/
static bool igb_add_rx_frag(struct igb_ring *rx_ring,
			    struct igb_rx_buffer *rx_buffer,
			    union e1000_adv_rx_desc *rx_desc,
			    struct sk_buff *skb)
{
	struct page *page = rx_buffer->page;
	unsigned char *va = page_address(page) + rx_buffer->page_offset;//virtual address
	unsigned int size = le16_to_cpu(rx_desc->wb.upper.length);
#if (PAGE_SIZE < 8192)//the kernel I use has a page size of 4096
	unsigned int truesize = IGB_RX_BUFSZ;
#else
	unsigned int truesize = SKB_DATA_ALIGN(size);
#endif
	unsigned int pull_len;

	if (unlikely(skb_is_nonlinear(skb)))
		goto add_tail_frag;

#ifdef HAVE_PTP_1588_CLOCK
	if (unlikely(igb_test_staterr(rx_desc, E1000_RXDADV_STAT_TSIP))) {
		igb_ptp_rx_pktstamp(rx_ring->q_vector, va, skb);
		va += IGB_TS_HDR_LEN;
		size -= IGB_TS_HDR_LEN;
	}
#endif /* HAVE_PTP_1588_CLOCK */

	if (likely(size <= IGB_RX_HDR_LEN)) {
		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));

		/* we can reuse buffer as-is, just make sure it is local */
		if (likely(page_to_nid(page) == numa_node_id()))
			return true;

		/* this page cannot be reused so discard it */
		put_page(page);
		return false;
	}

	/* we need the header to contain the greater of either ETH_HLEN or
	 * 60 bytes if the skb->len is less than 60 for skb_pad.
	 */
	pull_len = eth_get_headlen(skb->dev, va, IGB_RX_HDR_LEN);

	/* align pull length to size of long to optimize memcpy performance */
	memcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));

	/* update all of the pointers */
	va += pull_len;
	size -= pull_len;

add_tail_frag:
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
			(unsigned long)va & ~PAGE_MASK, size, truesize);

	return igb_can_reuse_rx_page(rx_buffer, page, truesize);
}
```

igb_can_reuse_rx_page decides whether the page can be reused, and performs the flip.

```c
static bool igb_can_reuse_rx_page(struct igb_rx_buffer *rx_buffer,
				  struct page *page,
				  unsigned int truesize)
{
	/* avoid re-using remote pages */
	if (unlikely(page_to_nid(page) != numa_node_id()))
		return false;

#if (PAGE_SIZE < 8192)
	/* if we are only owner of page we can reuse it */
	//a freshly allocated page has a page count of 1; while it is used by
	//both the cpu and the nic the count is 2, hence the check against 1:
	//the page may only be reused once the nic no longer uses it
	if (unlikely(page_count(page) != 1))
		return false;

	/* flip page offset to other buffer */
	//flip the page offset to the other buffer -- this was the part I found
	//hardest to follow at first; I will explain it with diagrams later.
	//IGB_RX_BUFSZ here is 2048
	rx_buffer->page_offset ^= IGB_RX_BUFSZ;
#else
	/* move offset up to the next cache line */
	rx_buffer->page_offset += truesize;

	if (rx_buffer->page_offset > (PAGE_SIZE - IGB_RX_BUFSZ))
		return false;
#endif

	/* bump ref count on page before it is given to the stack */
	//take a reference before handing the page to the stack; the stack
	//drops it again when it is done
	get_page(page);

	return true;
}
```

igb_reuse_rx_page is the function that actually performs the reuse: it assigns the flipped side of the old page to next_to_alloc. By this point the offset has already changed, i.e. the page has been flipped; in other words, the usable buffer has been successfully handed from next_to_clean to next_to_alloc.

```c
/**
 * igb_reuse_rx_page - page flip buffer and store it back on the ring
 * @rx_ring: rx descriptor ring to store buffers on
 * @old_buff: donor buffer to have page reused
 *
 * Synchronizes page for reuse by the adapter
 **/
static void igb_reuse_rx_page(struct igb_ring *rx_ring,
			      struct igb_rx_buffer *old_buff)
{
	struct igb_rx_buffer *new_buff;
	u16 nta = rx_ring->next_to_alloc;

	new_buff = &rx_ring->rx_buffer_info[nta];

	/* update, and store next to alloc */
	//nta is updated afterwards
	nta++;
	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;

	/* transfer page from old buffer to new buffer */
	//the old buffer here is the next_to_clean buffer after its page
	//has been flipped
	*new_buff = *old_buff;

	/* sync the buffer for use by the device */
	dma_sync_single_range_for_device(rx_ring->dev, old_buff->dma,
					 old_buff->page_offset,
					 IGB_RX_BUFSZ,
					 DMA_FROM_DEVICE);
}
```

4. Flipping the page

The core of page reuse is flipping the page (the "flip page offset to other buffer" mentioned above). What does that mean? PAGE_SIZE is typically 4096, and page_offset starts at 0. The line below XORs the offset with IGB_RX_BUFSZ, whose value is 2048, so page_offset toggles back and forth between 0 and 2048. In other words, the page holds two buffers: 0 ~ 2047 and 2048 ~ 4095.

	/* flip page offset to other buffer */
	rx_buffer->page_offset ^= IGB_RX_BUFSZ;

As shown in the figure below, the shaded area represents the 0 ~ 2047 buffer and the blank area the 2048 ~ 4095 buffer. This alternation is a ping-pong scheme, so the page can be called a ping-pong page. In this article I refer to the two halves as the page's front and back sides; of course a page has no real front or back, it is just a naming aid for understanding.
          \"\u7f51\u5361\u9a71\u52a8\u6536\u5305\u4ee3\u7801\u5206\u6790\u4e4b
On top of this, igb_can_reuse_rx_page decides whether the page can be reused; if it cannot, a fresh page is allocated instead. One case where reuse is impossible is when one side of the page is still in use (for example, still held by the network stack); then a new page must be allocated for the next_to_alloc slot.

V. Flow Diagrams

1. Allocating buffers for the descriptors

igb_configure => igb_alloc_rx_buffers.

	/* call igb_desc_unused which always leaves
	 * at least 1 descriptor unused to make sure
	 * next_to_use != next_to_clean
	 */
	/* this guarantees one descriptor always stays unused: desc 255 in
	 * the figure below */
	for (i = 0; i < adapter->num_rx_queues; i++) {
		struct igb_ring *ring = adapter->rx_ring[i];
		igb_alloc_rx_buffers(ring, igb_desc_unused(ring));
	}

Here pages are allocated for (num_rx_desc - 1) descriptors. After this code finishes, next_to_alloc = next_to_use = num_rx_desc - 1 and next_to_clean = 0.
[figure: ring state after the initial allocation]

2. Page reuse succeeds

2.1 One packet, one desc

To aid understanding, let us start from the beginning: the first packet arrives, uses only one descriptor, and processing begins.
(1) cleaned_count = 0, so no rx buffer allocation is needed here.
(2) igb_fetch_rx_buffer: the page at next_to_clean is flipped and stored at the next_to_alloc slot, after which next_to_alloc = 0. That is, the back side of desc 0's page is handed to desc 255. Note that desc 0's buffer entry has been cleared at this point.
(3) cleaned_count = 1.
(4) igb_is_non_eop: next_to_clean = 1.
(5) cleaned_count = 1, allocate buffers; because desc 255 has already been given desc 0's page by the next_to_clean step, no page allocation is needed, and next_to_alloc = next_to_use = 0.
[figure: ring state after one reused descriptor]

2.2 One packet, multiple descs

The multi-desc case is similar to the single-desc case; it is still a single packet, here spanning four descriptors.
(1) cleaned_count = 0, so no rx buffer allocation is needed here.
(2) igb_fetch_rx_buffer: the page at next_to_clean is flipped and stored at the next_to_alloc slot, after which next_to_alloc = 0. That is, the back side of desc 0's page goes to desc 255. Note that desc 0's buffer entry has been cleared.
(3) cleaned_count = 1.
(4) igb_is_non_eop: next_to_clean = 1. This is not the end of the packet, so continue.
(5) Repeat three more times. First iteration: desc 1's flipped page goes to desc 0, next_to_alloc = 1, cleaned_count = 2, next_to_clean = 2. Second iteration: desc 2's flipped page goes to desc 1, next_to_alloc = 2, cleaned_count = 3, next_to_clean = 3. Third iteration: desc 3's flipped page goes to desc 2, next_to_alloc = 3, cleaned_count = 4, next_to_clean = 4.
(6) cleaned_count = 4, allocate rx buffers; descs 255, 0, 1 and 2 all have pages already, so nothing needs allocating, and next_to_use = 3.
[figure: ring state after four reused descriptors]

3. Page reuse fails

3.1 One packet, one desc

Again, to aid understanding, start from the beginning: the first packet arrives, uses only one descriptor, and this time its page cannot be reused.
(1) cleaned_count = 0, so no rx buffer allocation is needed here.
(2) igb_fetch_rx_buffer: reuse fails, so dma_unmap_page is called and desc 0's page entry becomes NULL.
(3) cleaned_count = 1.
(4) igb_is_non_eop: next_to_clean = 1.
(5) cleaned_count = 1, allocate rx buffers; in igb_alloc_mapped_page, desc 255 has no page, so one is allocated for it, and next_to_alloc = next_to_use = 0.
[figure: ring state after a failed reuse on one descriptor]

3.2 One packet, multiple descs

Now the multi-desc, single-packet case. Assume four descriptors, and that the page of desc 2 cannot be reused.
(1) cleaned_count = 0, so no rx buffer allocation is needed here.
(2) igb_fetch_rx_buffer: the page at next_to_clean is flipped and stored at the next_to_alloc slot, after which next_to_alloc = 0. That is, the back side of desc 0's page goes to desc 255. Note that desc 0's buffer entry has been cleared.
(3) cleaned_count = 1.
(4) igb_is_non_eop: next_to_clean = 1. Not the end of the packet, so continue.
(5) igb_fetch_rx_buffer: the page at next_to_clean is flipped and stored at the next_to_alloc slot, after which next_to_alloc = 1. That is, the back side of desc 1's page goes to desc 0. Note that desc 1's buffer entry has been cleared.
(6) cleaned_count = 2.
(7) igb_is_non_eop: next_to_clean = 2. Not the end of the packet, so continue.
(8) igb_fetch_rx_buffer: desc 2's page cannot be reused, so dma_unmap_page is called and desc 2's page entry becomes NULL.
(9) cleaned_count = 3.
(10) igb_is_non_eop: next_to_clean = 3. Not the end of the packet, so continue.
(11) igb_fetch_rx_buffer: the page at next_to_clean is flipped and stored at the next_to_alloc slot, after which next_to_alloc = 2. That is, the back side of desc 3's page goes to desc 1. Note that desc 3's buffer entry has been cleared.
(12) cleaned_count = 4.
(13) igb_is_non_eop: next_to_clean = 4. This is the end of the packet.
(14) cleaned_count = 4, allocate rx buffers: of descs 255, 0, 1 and 2, only desc 2 is empty, the other three all have pages, so a single page is allocated for desc 2. Afterwards next_to_alloc = next_to_use = 3.
[figure: ring state after one failed reuse among four descriptors]

VI. Conclusion

That wraps up page reuse. Originally I only planned to cover igb_fetch_rx_buffer, but that always felt incomplete, so I added analysis of other parts of the Rx path as well. I hope you find it useful.

If this article helped you, feel free to like, comment, or bookmark it. Many thanks, goodbye~