Seastar, our new C++ framework for high-throughput server applications, is designed to make it possible to write code that is both scalable to large numbers of CPU cores and straightforward to work with. The latest advance in the “straightforward to work with” department is a new tutorial.
Ready to see how Seastar’s futures, promises, and continuations model really works? Grab and build a copy of Seastar, and you can start the tutorial right away.
You probably aren’t used to doing a “sleep” like this…
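Something along these lines — a minimal sketch assuming a recent Seastar release, where headers live under seastar/ and everything sits in the seastar namespace (older trees spell these slightly differently):

```cpp
#include <seastar/core/app-template.hh>
#include <seastar/core/sleep.hh>
#include <iostream>
#include <chrono>

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        using namespace std::chrono_literals;
        std::cout << "Sleeping... " << std::flush;
        // sleep() returns immediately with a future that becomes ready
        // after one second; .then() chains a continuation to run when it does.
        // Nothing blocks: the reactor keeps handling other work meanwhile.
        return seastar::sleep(1s).then([] {
            std::cout << "Done.\n";
        });
    });
}
```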
…but that’s a simple example of futures-based programming. Get the basics working and, just a few examples later, you’ll be ready for real parallel programming. The next example parallelizes the sleeps to keep things simple, but the same concepts apply even to complex applications.
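Again as a sketch against a recent Seastar release, something like this: three sleeps started together, each with its own continuation firing on its own schedule:

```cpp
#include <seastar/core/app-template.hh>
#include <seastar/core/sleep.hh>
#include <iostream>
#include <chrono>

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        using namespace std::chrono_literals;
        std::cout << "Sleeping... " << std::flush;
        // All three timers start now and run concurrently; each continuation
        // runs when its own timer expires. (Dropping a future on the floor
        // like this is fine only for a toy example.)
        (void)seastar::sleep(200ms).then([] { std::cout << "200ms " << std::flush; });
        (void)seastar::sleep(100ms).then([] { std::cout << "100ms " << std::flush; });
        return seastar::sleep(1s).then([] { std::cout << "Done.\n"; });
    });
}
```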
Try that with your grandpa’s POSIX system calls! In one short tutorial, you’ll soon be creating parallel programs that don’t just run fast, they run right.
The tutorial features several short, ready-to-run programs that build from a basic hello world (sketched below) up to all the essentials of the futures/promises/continuations model. Topics covered in sample code include:
Threads in Seastar
Futures and continuations basics
Capturing state in continuations
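For reference, that basic hello world looks roughly like this (again sketched against a recent Seastar release; the tutorial’s own version may differ slightly in headers and namespaces):

```cpp
#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>
#include <iostream>

int main(int argc, char** argv) {
    seastar::app_template app;
    // app_template parses Seastar's command-line options, starts the
    // engine, and runs the supplied function once everything is up.
    return app.run(argc, argv, [] {
        std::cout << "Hello world\n";
        return seastar::make_ready_future<>();
    });
}
```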
You’ll also learn how to use pkg-config with Seastar, and about the runtime options for memory management. The tutorial is a work in progress, so watch for more advanced info on Seastar networking, exception handling, and more.
Try it out
The new tutorial is on the project Wiki. If you have any questions, we’re on the seastar-dev mailing list, and you’re welcome to join the conversation.