I am building a very high performance Linux server (based on epoll, non-blocking sockets, and async disk I/O built on io_submit / io_getevents / eventfd). Some of my benchmarks suggest that the way I handle sockets is not efficient enough for my needs. Specifically, I am concerned with getting data from a userspace buffer to the network card, and from the network card into a userspace buffer (let's ignore the sendfile call for now).
From what I understand, calling read/write on a non-blocking Linux socket is not fully asynchronous: the system call blocks while it copies the buffer from userspace into the kernel (or the other way around), and only then returns. Is there any way to avoid this on Linux? Specifically, is there a fully asynchronous write call I can make on a socket that returns immediately, lets the network card DMA directly from the userspace buffer as needed, and raises an event / signals completion when done? I know Windows has an interface for this, but I could not find anything like it for Linux.
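To make the concern concrete, here is a minimal sketch (not the asker's actual code) of the pattern being described: a non-blocking socket driven by epoll writability. The helper name and structure are illustrative assumptions; the point is that even when the socket is writable, the send() call itself still performs a synchronous userspace-to-kernel copy before returning.

```c
#include <sys/types.h>
#include <sys/epoll.h>
#include <sys/socket.h>

/* Wait until `sock` is writable, then send. "Non-blocking" here only means
 * send() will not sleep waiting for socket buffer space; the copy of `buf`
 * into the kernel socket buffer still happens synchronously inside send(). */
static ssize_t nonblocking_send(int epfd, int sock, const void *buf, size_t len)
{
    struct epoll_event ev = { .events = EPOLLOUT, .data.fd = sock };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, sock, &ev) < 0)
        return -1;

    struct epoll_event ready;
    ssize_t n = -1;
    if (epoll_wait(epfd, &ready, 1, -1) == 1)
        n = send(sock, buf, len, MSG_DONTWAIT);   /* synchronous copy here */

    epoll_ctl(epfd, EPOLL_CTL_DEL, sock, NULL);
    return n;
}
```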
Thank you!
There has been some discussion along these lines on the Linux kernel mailing list recently, but the sticking point is that you cannot DMA from ordinary user-space buffers to the network card, because:
- what looks like contiguous data in the userspace linear address space is probably not contiguous in physical memory, which is a problem if the network card does not do scatter-gather DMA;
- on many machines, not all physical memory addresses are "DMA-able".
On recent kernels, you could try using vmsplice and splice together: vmsplice the pages you want to send (with SPLICE_F_GIFT) into a pipe, then splice them from the pipe into the socket (with SPLICE_F_MOVE).
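A rough sketch of that vmsplice + splice sequence is below. The helper name, pipe management, and error handling are my own assumptions, not part of the original answer; a real implementation would also loop on partial transfers and keep a long-lived pipe. Note that once pages are gifted with SPLICE_F_GIFT, the caller should not touch them again.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/uio.h>
#include <unistd.h>

/* Move one page-aligned buffer to `sock` via a pipe, avoiding the usual
 * userspace -> kernel copy: gift the pages into the pipe, then move the
 * pipe contents on to the socket. */
static int gift_and_send(int sock, void *page_aligned_buf, size_t len)
{
    int pipefd[2];
    if (pipe(pipefd) < 0)
        return -1;

    struct iovec iov = { .iov_base = page_aligned_buf, .iov_len = len };

    /* Gift the pages to the kernel instead of copying them into the pipe. */
    ssize_t in = vmsplice(pipefd[1], &iov, 1, SPLICE_F_GIFT);
    if (in < 0)
        goto fail;

    /* Move the pipe pages on to the socket. */
    ssize_t out = splice(pipefd[0], NULL, sock, NULL, (size_t)in, SPLICE_F_MOVE);
    if (out < 0)
        goto fail;

    close(pipefd[0]);
    close(pipefd[1]);
    return 0;

fail:
    close(pipefd[0]);
    close(pipefd[1]);
    return -1;
}
```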