Discussion:
The benchmark of Artanis: guile server, Fibers, and Ragnarok
Nala Ginrut
2018-05-12 15:43:04 UTC
Hi Arne!
Thanks for the reply!
Ragnarok and pristine Guile both let specific requests starve, while
fibers accepts higher average latency to avoid high maximum latency.
Is this repeatable? If yes, then you can see the difference in
scheduling here: With fibers none of the 1000 requests has to wait more
than a second, while with pristine guile and with ragnarok some requests
can stall everything. If you have a lot of resources being loaded to
display a page, the maximum latency is the effective page load delay.
Yes, Ragnarok is not preemptible yet, so it may delay too long when a
big request stalls.
There are only four situations in which a task can be scheduled (see
the sketch after this list):
1. I/O blocking
2. The socket buffer is full (users may tweak it)
3. Resources are insufficient to allocate (listening sockets, DB
connection pool, etc.)
4. Developers call (break-task) explicitly in the handler
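To make case 4 concrete, here is a minimal sketch (not Ragnarok's
actual code) of this cooperative model: each task runs under a prompt,
(break-task) aborts to that prompt, and the scheduler pushes the rest
of the task back onto a plain FIFO queue. It only assumes Guile's
built-in delimited continuations and the (ice-9 q) queue module.

(use-modules (ice-9 q))                     ; plain FIFO queue

(define task-tag (make-prompt-tag "task"))
(define task-queue (make-q))

(define (break-task)
  ;; Case 4 above: the handler yields voluntarily.
  (abort-to-prompt task-tag))

(define (run-task thunk)
  ;; Run THUNK until it finishes or yields; if it yields, keep the
  ;; captured continuation (the rest of the task) for a later round.
  (call-with-prompt task-tag
    thunk
    (lambda (k) (enq! task-queue k))))

(define (scheduler)
  ;; Plain FIFO: keep running whatever is at the head of the queue.
  (let loop ()
    (unless (q-empty? task-queue)
      (run-task (deq! task-queue))
      (loop))))

;; Example: a "handler" that yields once in the middle.
(enq! task-queue
      (lambda ()
        (display "first half\n")
        (break-task)
        (display "second half\n")))
(scheduler)

The same handler shape covers cases 1-3 as well: the runtime just
invokes the equivalent of break-task on the task's behalf when I/O
blocks or a buffer fills up.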

I would like to make it preemptible, but I still haven't figured out
how to preempt a task from outside of its delimited continuation.
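One possible approach, for what it's worth (a rough sketch under my own
assumptions, not a claim about how Fibers or Ragnarok actually do it):
arm a timer signal whose handler aborts to the scheduler's prompt.
Guile delivers signal handlers as asyncs at safe points, so the running
task gets unwound "from outside" without it ever calling (break-task)
itself.

(define sched-tag (make-prompt-tag "sched"))

(define (call-with-time-slice thunk usecs)
  ;; Run THUNK for at most USECS microseconds of CPU time; if the
  ;; timer fires first, return the captured continuation so the caller
  ;; can requeue "the rest of the task" and give it another slice
  ;; later.  ITIMER_PROF counts CPU time, so a task blocked on I/O is
  ;; not interrupted (blocking is already handled cooperatively).
  (call-with-prompt sched-tag
    (lambda ()
      (sigaction SIGPROF
        (lambda (sig) (abort-to-prompt sched-tag)))
      (setitimer ITIMER_PROF 0 0 0 usecs)   ; one-shot timer
      (let ((result (thunk)))
        (setitimer ITIMER_PROF 0 0 0 0)     ; disarm: finished in time
        result))
    (lambda (k) k)))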
I would also like to implement a better scheduler; for now it is just a
simple FIFO. But I need to know how much data is still left when a
suspendable port blocks, and there seems to be no interface for getting
that size. Maybe a patch is needed for it.
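Just to illustrate why that number would matter (purely hypothetical:
port-remaining-size below does not exist in Guile today, it stands for
whatever a future patch might expose as "bytes still pending when the
suspendable port blocked"), with it the scheduler could resume the task
closest to finishing instead of going in plain FIFO order:

(define (pick-next-task tasks)
  ;; TASKS is a list of (continuation . blocked-port) pairs.
  ;; Shortest-remaining-first: resume the task with the least data
  ;; still pending, so nearly finished responses get flushed quickly.
  ;; port-remaining-size is the hypothetical accessor discussed above.
  (car (sort tasks
             (lambda (a b)
               (< (port-remaining-size (cdr a))
                  (port-remaining-size (cdr b)))))))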
I left out the 4-instance ragnarok test, because its coping with latency
in the face of overload is not comparable, since it is less heavily
overloaded (and that’s the feature which struck me while reading).
And anyway: These are already pretty good numbers. They don’t achieve
the level of static file serving with massive caching (in my tests
lighttpd could get more than a factor 2 increase over (fibers web
server)), but it’s already on a level where it could support around 500
active users on a single instance running on consumer hardware.
I have to mention that Python's Django got 700 req/s in the same test
under the same conditions.
But that comparison doesn't mean much, since Django aims at security
and a full-featured web stack rather than raw performance.
I think the best choice is to use Nginx as a reverse proxy, since Nginx
can serve static files at something like 300,000 req/s of throughput.
No one can compete with it for static file handling.
What I also see is that Artanis seems to have low overhead. How do the
numbers change with more complex pages?
I have to mention that even after I modified the code to do real JSON
serialization from an assoc list, the test results stayed the same.
But OK, that is to the credit of the author of guile-json ;-)
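For reference, here is a small sketch of what that kind of benchmark
handler looks like. It assumes guile-json's (json) module with
scm->json-string accepting an assoc list for a JSON object (newer
guile-json releases do; older ones wanted a hash table), and Artanis's
usual init-server / get / run routing style with response-emit.

(use-modules (artanis artanis)
             (json))                        ; guile-json

(init-server)

(get "/json"
  (lambda (rc)
    ;; Real serialization work on every request: the assoc list is
    ;; turned into a JSON object here, not precomputed once.
    (response-emit
     (scm->json-string '(("message" . "Hello, World!"))))))

(run #:port 3000)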
For more complex pages and DB-backed dynamic pages, I've only done
rough tests so far, but the results are not bad. And I have many ideas
for optimizing it, so there's no hurry to benchmark the current state.

I'm going to submit Artanis to TechEmpower for the full test suite;
that will give more convincing results. But before that, I have to
finish all the features on my TODO list and make it more stable so it
doesn't crash. Version 0.2.5 is already very stable after eliminating
many exceptions, but I still need more users to test it and give
feedback.

Best regards.
Nala Ginrut
2018-10-11 06:03:05 UTC
To whoever may care: with the latest Guile 2.9.1, the requests/sec has
increased by around 19.6% compared to the same test earlier in this
thread.
Nice job!
Arne Babenhauserheide
2018-10-11 21:41:55 UTC
Post by Nala Ginrut
To whoever may care: with the latest Guile 2.9.1, the requests/sec has
increased by around 19.6% compared to the same test earlier in this
thread.
Nice job!
That’s great! Thank you for checking!

Best wishes,
Arne
--
To be apolitical
means to be political
without noticing it