Proxy: Implement tun raw network interface inbound support for Linux #5464
base: main
Conversation
|
We already have a documentation site, so why write a separate README.md? |
|
The current README explains how the feature is intended to be used. |
…ault implementation
|
It's quite long, and parts of it seem to be aimed at end users, like documentation, rather than at developers? |
|
Yes, README included is more an explanation for a user how to utilise the feature. |
|
Could you explain what ConnectionHandler does? I don't seem to see any logic related to it. |
|
"ConnectionHandler" is an interface with only one function "HandleConnection". gvisor call stack function every time there is new connection (TCP SYN packet or UDP packet that doesn't belong to any stream), stack function map gvisor connection to simple golang net.Conn and pass it to the HandlerConnection. HandleConnection receive net.Conn and destination as input, and simply pass it to the app dispatcher as new connection with the destination. At this stage the net.Conn stream is no different to stream received by any other proxy implementation as result of connect to the proxy. Gvisor is the connection that bridges network packets and net.Conn streams the usage of this function is on stack_gvisor.go#89 |
|
What I mean is: if there is no other abstraction logic, why declare an interface at all? |
|
In case there could be other implementations of the stack besides gvisor. |
|
Will Close() actually end up being called correctly on tun.LinuxTun? |
|
That's a legit thought. Thank you for that. |
|
I reviewed the flow, and it matches the other input implementations: there is no "close" signal that tells a proxy input/output handler to finish, so once it is initialised it is never shut down. There is no closing of the tun device in the WireGuard implementation, nor any socket cleanup in the other implementations.
This is exactly the TUN that Xray-core needs. On Linux, TPROXY actually performs better than TUN, so it would be even better if Windows TUN could be implemented as well. The reason I've been putting off Windows TUN is that the most promising option is eBPF on Windows, but it's still in beta and requires paying for real-name code signing, so I decided we want both; let's do wintun.dll first. @Owersun
|
Thank you for the kind words. I did omit the Windows implementation for the same reason you mentioned: it's complicated, it would require an external wintun.dll, and with all that complexity it would barely be used. tun really shines for forwarded traffic, which most of the time means a router setup, and 99% of those are Linux boxes. A Windows tun implementation would be used by maybe 1% of enthusiasts and would add 80% more code to implement. A really bad trade-off. That said, I made the code extendable enough that it can be added later if people actually ask for it. I just don't think the initial version must have it.
|
Windows TUN is the difference between having the feature and not having it at all, whereas Linux TUN vs Linux TPROXY is just a matter of performance and how troublesome the configuration is (
|
Sure, I'll have a look at how other apps do that for Windows.
Not really; for example, sing-box also has a tun interface and it's actively used on Windows in many GUI proxy clients. Having a cross-platform tun interface would be very nice for Xray, because there would be no need for a double setup of xray <--> sing-box or tun2proxy for system-wide tunneling on Windows.
|
Sure thing. As I said, this is an initial implementation I made as an MVP (minimal viable product). I tried to keep it as clean as possible, so that the idea is clear, and extendable at the same time.
|
Thanks @Owersun for your great work! Currently, we need to spin off a separate tun2socks process, like the following: https://github.com/2dust/v2rayNG/blob/master/V2rayNG/app/src/main/java/com/v2ray/ang/service/Tun2SocksService.kt#L35
|
Making it work on Android requires literally a one-line change.
|
Guys, I have some good news. I tried passing the Android VPN service fd to the core, and it works in initial testing on v2rayNG! This is an important step towards one-core-fits-all, and it will simplify many use cases. My suggestion is that we accept the PR now and work on Windows and other improvements later. @RPRX @Fangliding
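For reference, a hedged sketch in Go of why this needs so little code (the helper name is hypothetical, not from the PR): on Android the tun device is created by VpnService, so instead of opening /dev/net/tun itself the core only has to wrap the descriptor it is handed.

```go
package tunsketch

import "os"

// fileFromAndroidFd wraps a tun file descriptor handed over by the Android
// VpnService (e.g. through gomobile bindings) into an *os.File that the same
// packet read/write loop can use unchanged.
func fileFromAndroidFd(fd int) *os.File {
	// os.NewFile does not validate or duplicate the descriptor; ownership
	// moves to the returned *os.File, so the caller must not close fd itself.
	return os.NewFile(uintptr(fd), "tun")
}
```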
|
Can it be more efficient than hev? (
|
I'm glad it turned out to be easily usable; that was the whole idea. As for Windows, I'm honestly done with the implementation, but it just looks horrible... It requires an external wintun.dll (all the other apps, wireguard-go/sing-tun/etc., do it the same way), it has problems with IPv6, and it does a lot of internal memory allocation (although that doesn't affect speed much).
Efficiency is one thing, an all-Go stack is another, which is why v2rayNG made tun optional (there are two options now, and I plan to add a third).
|
The Android build was added later; other Android clients probably use the Linux arm64 build, and I'm not sure whether that will cause any problems.
If this can be implemented, v2rayNG will remove badvpn-tun2socks and keep only xray tun and hev tun.
This is an implementation of a tun L3 network interface as an input to the app.
There is a README.md in the folder explaining how the feature works.
Worth mentioning about the implementation itself:
This is an extremely simplified implementation (not in functionality, but intentionally without excessive complexity).
There is Linux support only (though the implementation allows adding support for other OSes later, if needed). The most probable use case for this feature is router boxes, which are mostly Linux-based devices.
There are no internal app configuration options to manage the interface, since a network interface is an OS-level entity. The complications of a second routing table, ip rules, or whatever else is needed to make this work properly should be handled by the OS, to ensure proper integration with the network state of the system. This is an explicit decision: there are so many different things you can do with a network interface on Linux that making all of them configurable through the app would be excessive.
No additional external libraries are used; the whole IP stack is the gvisor library, which already exists in the app. The tun interface itself is just a file in the system (see the sketch after this list).
OS-level optimisations like GRO/GSO are intentionally disabled, since pass-through traffic (forwarded through the interface) is not subject to them anyway. Implementing them, and always checking and accounting for possible GRO/GSO tables, would hurt performance rather than improve it in a router-device configuration. There is a very slim potential advantage for traffic originating from the router itself, which would gain maybe 0.1% real-life performance but would require about 80% more code to support.
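To make the "just a file" point concrete, here is a minimal sketch using the standard Linux tun ioctls exposed by golang.org/x/sys/unix. The package and function names are illustrative, not code from the PR: one open of /dev/net/tun plus one TUNSETIFF ioctl yields a descriptor where every Read returns one raw IP packet and every Write injects one.

```go
package tunsketch

import (
	"os"

	"golang.org/x/sys/unix"
)

// openLinuxTun opens /dev/net/tun and attaches it to the interface `name`.
// IFF_TUN asks for raw L3 packets; IFF_NO_PI drops the 4-byte packet-info
// header. GSO offload would additionally require requesting IFF_VNET_HDR,
// which is deliberately not done here, matching the choice described above.
func openLinuxTun(name string) (*os.File, error) {
	fd, err := unix.Open("/dev/net/tun", unix.O_RDWR|unix.O_CLOEXEC, 0)
	if err != nil {
		return nil, err
	}

	ifr, err := unix.NewIfreq(name)
	if err != nil {
		unix.Close(fd)
		return nil, err
	}
	ifr.SetUint16(unix.IFF_TUN | unix.IFF_NO_PI)

	if err := unix.IoctlIfreq(fd, unix.TUNSETIFF, ifr); err != nil {
		unix.Close(fd)
		return nil, err
	}

	return os.NewFile(uintptr(fd), "/dev/net/tun"), nil
}
```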
Several tests were done with different scenarios, all of them using VRAY-XTLS-Reality as the uplink.
Normal browsing works just fine; TLS sniffing also works, no issues with that.
SSH through the interface also worked without any issues: no delays, no lag.
Torrents work just fine.
In one case, the test subject managed to run another IPSec (UDP) based VPN on top of it: connecting through Xray, then through a commercial IPSec VPN to different locations, and then using VoIP and video-conferencing apps on top of that, joining several meetings. All that from a country where the IPSec VPN is under a strict ban.
I honestly couldn't come up with more cases to try after that worked.
With my router based on a MediaTek mt7986a (Banana Pi R3), I was not able to hit a traffic ceiling on my 100 Mb uplink; services like speedtest always load it up to the top.
I do expect the numbers to be less extreme when many connections are opened and closed: the CPU profile shows that the CPU spikes on connection establishment (routing through the app, forwarding to the uplink, and so on), but it has no problem with traffic flowing through an already-established connection.
All in all, this is a very similar implementation to any standalone tun-to-socks proxy out there, just without excessive complexity and without the extra app-to-app connection in between: packets, converted to connection streams, pass from the network directly to the app core.
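Continuing the earlier sketches (same hypothetical package and names, not code from the PR), this is roughly how the pieces fit together: an fd-based gvisor link endpoint reads raw packets from the tun file, the netstack reassembles them into flows, and the forwarder from the first sketch delivers each flow to the app core as a net.Conn.

```go
package tunsketch

import (
	"fmt"

	"gvisor.dev/gvisor/pkg/tcpip"
	"gvisor.dev/gvisor/pkg/tcpip/header"
	"gvisor.dev/gvisor/pkg/tcpip/link/fdbased"
	"gvisor.dev/gvisor/pkg/tcpip/network/ipv4"
	"gvisor.dev/gvisor/pkg/tcpip/network/ipv6"
	"gvisor.dev/gvisor/pkg/tcpip/stack"
	"gvisor.dev/gvisor/pkg/tcpip/transport/tcp"
	"gvisor.dev/gvisor/pkg/tcpip/transport/udp"
)

// newStackForTun builds a gvisor netstack on top of an already-open tun fd
// (from openLinuxTun or fileFromAndroidFd) and registers the handler from
// the first sketch, so raw packets become net.Conn streams for the app core.
func newStackForTun(tunFd int, mtu uint32, handler ConnectionHandler) (*stack.Stack, error) {
	// The link endpoint reads and writes raw IP packets on the tun descriptor.
	link, err := fdbased.New(&fdbased.Options{FDs: []int{tunFd}, MTU: mtu})
	if err != nil {
		return nil, err
	}

	s := stack.New(stack.Options{
		NetworkProtocols:   []stack.NetworkProtocolFactory{ipv4.NewProtocol, ipv6.NewProtocol},
		TransportProtocols: []stack.TransportProtocolFactory{tcp.NewProtocol, udp.NewProtocol},
	})

	if tcpipErr := s.CreateNIC(1, link); tcpipErr != nil {
		return nil, fmt.Errorf("create NIC: %s", tcpipErr)
	}

	// Accept traffic for any destination: the stack acts as a transparent hop.
	s.SetPromiscuousMode(1, true)
	s.SetSpoofing(1, true)
	s.SetRouteTable([]tcpip.Route{
		{Destination: header.IPv4EmptySubnet, NIC: 1},
		{Destination: header.IPv6EmptySubnet, NIC: 1},
	})

	registerTCPForwarder(s, handler)
	return s, nil
}
```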