The Fastest Rust Web Frameworks in 2024

Introduction: As the Rust language grows in popularity, choosing the right web framework is crucial.

This article uses a "Hello World" benchmark to compare the performance of Actix, Axum, Rocket, Tide, Gotham, Nickel, Ntex, and Poem.

This simple test is only a starting point; depending on interest, we may dig into more complex scenarios later, such as static file serving and JSON processing (a sketch of the latter follows below).
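To give a sense of what the JSON scenario would involve, here is a minimal sketch of a JSON endpoint in Axum; the route name and payload types are made up for illustration, and it assumes serde (with the derive feature) and serde_json as extra dependencies. It is not part of the benchmark below.

use axum::{routing::post, Json, Router};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct Input {
    name: String,
}

#[derive(Serialize)]
struct Output {
    greeting: String,
}

// Deserialize the JSON request body, build a response, and serialize it back as JSON.
async fn greet(Json(input): Json<Input>) -> Json<Output> {
    Json(Output {
        greeting: format!("Hello, {}!", input.name),
    })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/greet", post(greet));
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}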

In practice, this means building a small web server that answers every incoming HTTP request with a "Hello, World!" message.

All benchmarks were run on the same machine, a 2018 MacBook Pro with 32 GB of RAM and a 6-core Intel i9, and each framework was evaluated for speed, resource usage, and ease of implementation.

The load generator is Apache Bench, and every application is built in release mode.
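Concretely, each server is compiled with optimizations and started before the load test; assuming a standard Cargo workflow, that amounts to something like:

cargo run --release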

The code for each application is shown below.

Actix

[package]
name = "actix"
version = "0.1.0"
edition = "2021"

[dependencies]
actix-web = "4"
use actix_web::{get, App, HttpResponse, HttpServer, Responder};

#[get("/")]
async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello world!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .service(hello)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}

Axum

[package]
name = "axum-hello"
version = "0.1.0"
edition = "2021"

[dependencies]
axum = "0.7.3"
tokio = { version = "1.0", features = ["full"] }
use axum::{response::Html, routing::get, Router};

#[tokio::main]
async fn main() {
    // build our application with a route
    let app = Router::new().route("/", get(handler));

    // run it
    let listener = tokio::net::TcpListener::bind("127.0.0.1:8080")
        .await
        .unwrap();
    println!("listening on {}", listener.local_addr().unwrap());
    axum::serve(listener, app).await.unwrap();
}

async fn handler() -> Html<&'static str> {
    Html("Hello world!")
}

Rocket

[package]
name = "rocket-hello"
version = "0.1.0"
edition = "2021"

[dependencies]
rocket = "0.5.0"
#[macro_use] extern crate rocket;

#[get("/")]
fn hello() -> String {
    format!("Hello world!")
}

#[launch]
fn rocket() -> _ {

    let config = rocket::Config {
        port: 8080,
        log_level: rocket::config::LogLevel::Off,
        ..rocket::Config::debug_default()
    };

    rocket::custom(&config)
        .mount("/", routes![hello])

}

Tide

[package]
name = "tide-hello"
version = "0.1.0"
edition = "2021"

[dependencies]
tide = "0.16.0"
async-std = { version = "1.8.0", features = ["attributes"] }
#[async_std::main]
async fn main() -> Result<(), std::io::Error> {

    let mut app = tide::new();

    app.at("/").get(|_| async { Ok("Hello world!") });
    app.listen("127.0.0.1:8080").await?;

    Ok(())
}

Gotham

[package]
name = "gotham-hello"
version = "0.1.0"
edition = "2021"

[dependencies]
gotham = "0.7.2"
use gotham::state::State;

pub fn say_hello(state: State) -> (State, &'static str) {
    (state, "Hello world!")
}

/// Start a server and call the `Handler` we've defined above for each `Request` we receive.
pub fn main() {
    gotham::start("127.0.0.1:8080", || Ok(say_hello)).unwrap()
}

Ntex

[package]
name = "ntex-hello"
version = "0.1.0"
edition = "2021"

[dependencies]
ntex = { version = "0.7.16", features = ["tokio"] }
use ntex::web;

#[web::get("/")]
async fn index() -> impl web::Responder {
    "Hello, World!"
}

#[ntex::main]
async fn main() -> std::io::Result<()> {
    web::HttpServer::new(|| web::App::new().service(index))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}

Poem

[package]
name = "poem-hello"
version = "0.1.0"
edition = "2021"

[dependencies]
poem = "1.3.59"
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
use poem::{
    get, handler, listener::TcpListener, middleware::Tracing, EndpointExt, Route, Server,
};

#[handler]
fn hello() -> String {
    format!("Hello world!")
}

#[tokio::main]
async fn main() -> Result<(), std::io::Error> {
    let app = Route::new().at("/", get(hello)).with(Tracing);
    Server::new(TcpListener::bind("0.0.0.0:8080"))
        .name("hello-world")
        .run(app)
        .await
}

The results are as follows:

Each test, at 50, 100, and 150 concurrent connections, executes a total of 1,000,000 requests.
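For reference, with the stock Apache Bench binary the three runs can be invoked roughly as follows (the exact flags used in the original test are an assumption):

ab -n 1000000 -c 50 http://127.0.0.1:8080/
ab -n 1000000 -c 100 http://127.0.0.1:8080/
ab -n 1000000 -c 150 http://127.0.0.1:8080/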

The results are presented in the tables below.

50 concurrent connections:

100 concurrent connections:

150 concurrent connections:

Based on these runs, we can draw the following conclusions:

  • Tide is the slowest (it takes 12 seconds to complete the 1M requests, averaging 159K requests/sec).

  • Axum is the fastest (it can complete the 1M requests within 6 seconds).

  • Resource usage is nearly identical across all contenders.

Winner: Axum

Keep in mind that this is just a "hello world" server that does nothing else. For more complex projects, the performance gap will likely not be as large.

The source code for this article is available for download on GitHub:

https://github.com/randiekas/rust-web-framework-benchmark

Thanks for reading; I hope you found it useful.