WebTorch: A Load Balancer That Learns

Post by: Lef Ioannidis
August 4, 2021

In my previous blog post, “How I Stopped Worrying and Embraced Docker Microservices,” I talked about why microservices are the bee’s knees for scaling Machine Learning in production. A fair amount of time has passed since then (almost a year, whoa), and it has become clear that building Deep Learning pipelines in production is a more complex, multi-faceted problem. Yes, microservices are a fantastic tool for software reuse, distributed systems design, quick failure and recovery, yadda yadda yadda. But what seems very obvious now is that Machine Learning services are very stateful, and statefulness is a problem for horizontal scaling.

Context switching latency

An easy way to deal with this issue is to recognize that ML models are large, and thus should not be context switched. If a model is started on instance A, you should try to keep it on instance A as long as possible. NGINX Plus comes with support for sticky sessions, which means that requests from the same session can always be load balanced to the same upstream, a super useful feature. That was 30% of the message of my Nginxconf 2017 talk.
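For reference, a sticky-session setup in NGINX Plus looks roughly like this (a minimal sketch; the upstream name, addresses, and cookie name are mine, not from the talk):

```nginx
upstream ml_models {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;

    # NGINX Plus feature: the first response sets a cookie naming the upstream
    # that served it, and later requests carrying that cookie are routed back
    # to the same instance -- so a model loaded on instance A stays on A.
    sticky cookie srv_id expires=1h path=/;
}

server {
    listen 80;

    location /predict {
        proxy_pass http://ml_models;
    }
}
```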

The other 70% of my message was urging people to move AWAY from microservices for Machine Learning. As an extreme example, we announced WebTorch, a full-on Deep Learning stack on top of an HTTP server, running as a single program. For your reference, a Deep Learning pipeline looks like this:

[Figure: The pipeline required for Deep Learning in production. What is this data? > Why is it so dirty? > Alright, now it’s clean, but my neural net still doesn’t get it > Finally, it gets it!]

Now consider the two extremes in implementing this pipeline:

  1. Every stage is a microservice.
  2. The whole thing is one service.

Both seem equally terrible for different reasons, and here I will explain why designing an ML pipeline is a zero-sum problem.

Communication latency

If every stage of the pipeline is a microservice, you introduce a huge communication overhead, because the very large data frames passed between services also need to be:

  1. Serialized
  2. Compressed (+ Encrypted)
  3. Queued
  4. Transferred
  5. Dequeued
  6. Decompressed (+ Decrypted)
  7. Deserialized

What a pain. What a terrible thing to spend cycles on. All of these steps are repeated every time a microservice boundary is crossed. The horror, the terrible end-to-end performance horror!
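To put the list above in code: with Torch’s built-in serializer, every hop across a service boundary pays a round trip like this (a sketch; compression, encryption, and queuing would stack on top of it):

```lua
require 'torch'

-- a "data frame" of the kind that crosses service boundaries
local batch = torch.DoubleTensor(10000, 512):uniform()

-- sender side, steps 1-4: serialize, compress (+ encrypt), queue, transfer
local payload = torch.serialize(batch)       -- step 1: tensor -> string
-- ...compress, encrypt, enqueue, and push payload over the network here...

-- receiver side, steps 5-7: dequeue, decompress (+ decrypt), deserialize
local restored = torch.deserialize(payload)  -- step 7: string -> tensor

-- pure overhead: we end up with exactly the tensor we started with
assert(restored:isSameSizeAs(batch))
```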

In the opposite case, you’re writing a monolith that is hard to maintain. You’re probably stuck with uncomfortable semantics for either the HTTP server or the ML part, you can’t monitor the in-between stages, and so on. Like I said, writing an ML pipeline for production is a zero-sum problem.

An extreme example: All-in-one deep learning

[Figure: Venn diagram of Torch and nginx. Torch and nginx have one thing in common: the amazing LuaJIT.]

That’s right. You’ll need to look at your use case and decide where to draw the line: where does the HTTP server stop, and where does the ML back-end start? If only a tool made this decision easy, and even let you go to the extreme of writing a monolith, without sacrificing either HTTP performance (and pretty HTTP server semantics) or ML performance and relevance in the rapidly growing Deep Learning market. Now such a tool is here (in alpha), and it’s called WebTorch.

WebTorch is the freak child of the fastest, most stable HTTP server, nginx, and the fastest, most relevant Deep Learning framework, Torch.
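To make the “freak child” concrete, here is roughly what serving a Torch model from inside the HTTP server’s own LuaJIT VM looks like, written as an OpenResty-style handler (a conceptual sketch, not WebTorch’s actual API; the model path is made up):

```lua
-- the model lives inside the server process: no serialization across services,
-- no network hop, no context switch between the HTTP layer and the ML layer
local nn = require 'nn'

-- loaded once per worker, then reused across requests
local model = torch.load('/models/xor.t7')  -- hypothetical model file

local function handle_predict(request_body)
  local input  = torch.deserialize(request_body)  -- tensor straight off the wire
  local output = model:forward(input)
  return torch.serialize(output)
end
```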

Now, of course, that doesn’t mean WebTorch is either the best-performing HTTP server or the best-performing Deep Learning framework, but it’s at least worth a look, right? So I ran some benchmarks: I loaded the XOR neural network found on the Torch training page, then used another popular Lua-scriptable tool, wrk, to benchmark my server, sending serialized Torch 2D DoubleTensors in POST requests to train on.
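For context, the XOR network from the Torch training page is tiny. From memory, it looks something like this (a sketch; layer sizes and learning rate may differ slightly from the tutorial):

```lua
require 'nn'

-- two inputs -> small hidden layer -> one output
local mlp = nn.Sequential()
mlp:add(nn.Linear(2, 20))
mlp:add(nn.Tanh())
mlp:add(nn.Linear(20, 1))

local criterion = nn.MSECriterion()

-- one training step on an (input, target) pair, roughly what each POST triggers
local function train_step(input, target)
  criterion:forward(mlp:forward(input), target)
  mlp:zeroGradParameters()
  mlp:backward(input, criterion:backward(mlp.output, target))
  mlp:updateParameters(0.01)
end
```

On the wrk side, a run along the lines of `wrk -t2 -c100 -d30s -s post_tensor.lua http://localhost:8080/train` hammers the training endpoint with those POST bodies (the script name and endpoint here are hypothetical).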

Here are the results:

[Figure: wrk benchmark results]
Huzzah! Over 1000 req/sec on my MacBook Air, with no CUDA support and 2 Intel cores!

So there, plug that into a CUDA machine and see how much performance you can squeeze out of that bad baby. I hope I’ve convinced you that sometimes mixing two great things CAN lead to something great, and that WebTorch is an ambitious and interesting open-source project!

And hopefully, in due time, it will become a fast, production-grade server that makes it easy for Data Scientists to deploy their models in the cloud (do people still say cloud?) and for DevOps people to deploy and scale them.

Possible applications of such a tool include, but are not limited to:

  • Classification of streaming data
  • Adaptive load balancing
  • DDoS attack/intrusion detection
  • Detect and adapt to upstream failures
  • Train and serve NNs
  • Use cuDNN, cuNN, and cuTorch inside NGINX
  • Write GPGPU code on NGINX
  • Machine learning NGINX plugins
  • Easily serve GPGPU code
  • Rapid prototyping of Deep Learning solutions

Maybe your own?

