Cloudius Systems

Getting Started With Seastar

Seastar, our new C++ framework for high-throughput server applications, is designed to make it possible to write code that is both scalable to large numbers of CPU cores and straightforward to work with. The latest advance in the “straightforward to work with” department is a new tutorial.

Ready to see how Seastar’s futures, promises, and continuations model really works? Great—grab and build a copy of Seastar, and you’re ready to start the tutorial.

You probably aren’t used to doing a “sleep” like this…

std::cout << "Sleeping... " << std::flush;
using namespace std::chrono_literals;
sleep(1s).then([] {
    std::cout << "Done.\n";
});

…but that’s a simple example of futures-based programming. Make the basics work, and just a few examples later you’ll be ready to do real parallel programming. It’s parallel sleep to keep the example simple, but the same concepts apply even to complex applications.

void f() {
    std::cout << "Sleeping... " << std::flush;
    using namespace std::chrono_literals;
    sleep(200ms).then([] { std::cout << "200ms " << std::flush; });
    sleep(100ms).then([] { std::cout << "100ms " << std::flush; });
    sleep(1s).then([] { std::cout << "Done.\n"; engine_exit(); });
}

Try that with your grandpa’s POSIX system calls! In one short tutorial, you’ll soon be creating parallel programs that don’t just run fast, they run right.

The tutorial features several short, ready-to-run programs that will build from a basic hello world up to all the essentials of the futures/promises/continuations model. Topics covered in sample code include:

  • Threads in Seastar

  • Futures and continuations basics

  • Capturing state in continuations

You’ll also learn how to use pkg-config with Seastar, and runtime options for memory management. The tutorial is a work in progress, so watch for more advanced info on Seastar networking, exception handling, and more.

Try it out

The new tutorial is on the project Wiki. If you have any questions, we’re on the seastar-dev mailing list, and you’re welcome to join the conversation.

Meetup Notes: Back to the Future With C++ and Seastar

By Tzach Livyatan

Cloudius founder Avi Kivity presented “Back to the future with C++ and Seastar” at the recent Sayeret Lambda Meetup group, helping to revise the audience’s impressions of the C++ language. C++ is often thought of as a legacy imperative language with roots in 1970s C. But in the past few years it has been thoroughly modernized, now offering streamlined support for modern paradigms such as lambdas, metaprogramming, and functional programming, while retaining no-compromise performance.

Photo: Tzach Livyatan for Cloudius Systems

Seastar is a modern, open source server application framework written in C++ that presents a future/promise based API to the user while delivering top-of-the-line performance—more than five times the nearest competitor, with 7 million requests per second served on a single machine.

The Meetup group had a good attendance of about 35 people. Some of the Seastar questions included:

  • Can you run Seastar on a subset of the cores? (answer: yes)

  • How do you pin memory? (answer: Each thread preallocates a large piece of memory. By default the machine’s entire memory is preallocated for the application, except a small reservation left for the OS, which defaults to 512 MB. Pages are NUMA-bound to the local node with mbind.)

Here are some additional questions that were asked, and you’re welcome to use the mailing list to get answers:

  1. On the Memcache results: did you try running Memcache on DPDK without Seastar?
  2. Did you test with and without HyperThreads?
  3. How do you debug/trace the execution of tasks?
  4. Can you run Seastar on a subset of the cores?
  5. How do you use external libs from Seastar?
  6. Can you use it from Python?
  7. What happens when the system/queues are overloaded?
  8. Can you simulate Boost.Asio?
  9. How do you pin memory?
  10. Are you production ready? When will 1.0 be available?
  11. Who Framed Roger Rabbit?

Seastar is designed to make it possible to write code that is both scalable to large numbers of CPU cores and straightforward to work with.

If you have Seastar questions of your own, please join the seastar-dev mailing list, and ask as many as you like. Or follow @CloudiusSystems on Twitter for announcements of future events.

Seastar: New C++ Framework for Web-scale Workloads

Today, we are releasing Seastar, a new open-source C++ framework for extreme high-performance applications on OSv and Linux. Seastar brings a 5x throughput improvement to web-scale workloads, at millions of transactions per second on a single server, and is optimized for modern physical and virtual hardware.

[Figure: Seastar memcached benchmark graph]

Benchmark results are available from the new Seastar project site.

Today’s server hardware is substantially different from the machines for which today’s server software was written. Multi-core design and complex caching now require us to make new assumptions to get good performance. And today’s more complex workloads, where many microservices interact to fulfil a single user request, are driving down the latencies required at all layers of the stack. On new hardware, the performance of standard workloads depends more on locking and coordination across cores than on performance of an individual core. And the full-featured network stack of a conventional OS can also use a majority of a server’s CPU cycles.

Seastar reaches linear scalability, as a function of core count, by taking a shard-per-core approach. Seastar tasks do not depend on synchronous data exchange with other cores, which is usually implemented by compare-exchange and similar locking schemes. Instead, each core owns its resources (RAM, NIC queue, CPU) and exchanges async messages with remote cores. Seastar includes its own user-space network stack, which runs on top of the Data Plane Development Kit (DPDK). All network communications can take place without system calls, and no data copying ever occurs. Seastar is event-driven and supports writing non-blocking, asynchronous server code in a straightforward manner that facilitates debugging and reasoning about performance.

Seastar is currently focused on high-throughput, low-latency network applications. For example, it is useful for NoSQL servers, for data caches such as memcached, and for high-performance HTTP serving. Seastar is available today, under the Apache license version 2.0.

Please follow @CloudiusSystems on Twitter for updates.