
Helping every child read with Wadhwani AI

AI solution to assess and improve the reading skills of children in underserved communities

Wadhwani AI is a non-profit organization that works on multiple turnkey AI solutions for underserved populations in developing countries.

Through the Vachan Samiksha project, the team is developing a customized AI solution that teachers in rural India can use to assess the reading fluency of students and develop a personalized learning plan to improve the reading skills of each student.

The team had deployed the solution in primary schools to conduct pilots. However, the team faced the following issues that needed to be solved before the project’s scope could be expanded to more schools and students:

  1. Very high computing cost: The Vachan Samiksha model needed GPUs for inference, so the team had to bear very high costs to keep GPU instances provisioned over the entire deployment duration.
  2. Scaling was limited: Scaling was capped by the quota of GPU ML instances the team could get on the managed ML service, and the process to get more quota was slow and involved making a business case. Getting non-managed instances on raw Kubernetes was much easier.
  3. Some requests took a long time to get a response: The pilots were conducted across thousands of schools with millions of students participating simultaneously, which required the system to scale horizontally when request throughput increased. However, the managed ML service took upwards of 9 minutes to scale, giving a poor experience to the end user.

The TrueFoundry team partnered with Wadhwani AI to solve these problems. Using the TrueFoundry platform, the team was able to:

  1. Scale the application to handle 10X Requests per second compared to the managed ML Service.
  2. Reduce the cloud cost incurred by ~55% with the same level of reliability and performance.
  3. Reduce the latency of requests by ~80% when the pods are scaling horizontally.

About Wadhwani AI

Wadhwani AI was founded by Romesh and Sunil Wadhwani (part of the TIME100 AI list) to harness AI to solve problems faced by underserved communities in developing nations. They partner with governments and global nonprofit bodies worldwide to deliver value through their solutions. As a not-for-profit, Wadhwani AI uses artificial intelligence to solve social problems in fields including agriculture, education, and health. Some of their projects include:

  • Pest management for cotton farms: The solution helps reduce crop losses by detecting and controlling pests that affect the cotton plant.
  • TB adherence prediction: Deployed at over 100 public health facilities, it helps identify high-risk patients, detect drug resistance, and help in TB diagnosis using ultrasound data.
  • Newborn anthropometry: A solution that measures baby weight using a smartphone camera and tracks growth indicators.
  • COVID-19 forecasting and diagnosis: A solution that predicts the spread of the pandemic and detects COVID-19 infection using cough sounds.

Wadhwani AI also works with partner organizations to assess their AI-readiness, which is their ability to create and use AI solutions effectively and sustainably. Wadhwani AI’s work aims to use AI for good and to improve the lives of billions of people in developing countries.

Wadhwani AI’s Oral Reading Fluency Tool: Vachan Samiksha

Reading skills are fundamental to any child's educational foundation. Unfortunately, many students from rural and underprivileged regions of India and other developing nations lack these skills. To address this problem at a foundational level, the Wadhwani AI team has developed an AI-based Oral Reading Fluency tool called Vachan Samiksha.

The tool deploys AI to analyze every child’s reading performance. It is currently targeted mostly at rural and semi-urban regions of the country and is being used across age groups. To make the solution generalizable for most of the country, the team has built an accent-inclusive model to assess regional languages and English. Manual assessments of these skills carry their own biases and are often inaccurate.

The solution is served to the users (teachers at the target schools) through an app that invokes the model deployed on the cloud. The student is asked to read a paragraph, which is recorded by the application and sent to the cloud. There, the model assesses reading accuracy, speed, comprehension, and other complex learning delays that could be missed in a normal evaluation. Besides assessing these skills, the application also creates a personalized learning plan for each student to facilitate their learning, and generates demographic reports for macro-level actions by government authorities. The team had deployed the model for the pilot using the cloud provider's managed ML service.

When we started our collaboration with the Vachan Samiksha team within Wadhwani AI, the team had been leveraging the native MLOps stack of their cloud provider to deploy the model for its pilot with the Education Department of Gujarat.

Their infrastructure setup was as follows:

  1. Managed Async Endpoint: The team wanted an asynchronous inference setup since the model could take ~5-7 seconds per inference. When the application received a lot of traffic simultaneously, requests needed to be buffered until a worker could pick them up and infer on them (a minimal sketch of this pattern appears below). The cloud provider's async endpoint internally makes use of its native queue.
  2. Managed Container Service: The team was using the managed container service to host the backend service for the application.
  3. Queue workers: The managed MLOps service used reserved ML instances as queue workers that picked requests off the queue and inferred on them.
  4. Data Source: Queued payloads were written to and read from the cloud provider's storage system.
  5. SNS: It was used as the broker to publish the output path and the success/failure messages from the output message queue.
Vachan Samiksha Team's Architecture with Cloud Provider's Managed ML Service
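
The setup above is a standard asynchronous-inference pattern: requests are buffered in a queue, a GPU worker pulls them off, runs inference, and publishes a success/failure message with the output path. Below is a minimal, illustrative Python sketch of that worker loop; the in-process queue and the stubbed run_inference and publish functions stand in for the managed queue, the GPU model, and the SNS-style notifier, and are not the team's actual code.

    # Minimal sketch of the async-inference worker pattern described above.
    # queue.Queue stands in for the cloud provider's managed queue, and
    # run_inference/publish are stubs for the GPU model and the notifier.
    import queue
    import time

    request_queue: "queue.Queue[dict]" = queue.Queue()  # stand-in for the managed queue

    def run_inference(audio_path: str) -> dict:
        """Stub for the ~5-7 s GPU inference on one student's recording."""
        time.sleep(0.1)  # placeholder for the real model latency
        return {"audio": audio_path, "fluency_score": 0.0}

    def publish(message: dict) -> None:
        """Stub for the SNS-style success/failure notification."""
        print("notify:", message)

    def worker_loop(max_idle_polls: int = 3) -> None:
        """Pull queued requests, infer on them, and publish the result or the failure."""
        idle = 0
        while idle < max_idle_polls:
            try:
                request = request_queue.get(timeout=1)
            except queue.Empty:
                idle += 1
                continue
            idle = 0
            try:
                result = run_inference(request["audio_path"])
                publish({"status": "success", "output": result})
            except Exception as exc:
                publish({"status": "failure", "error": str(exc)})
            finally:
                request_queue.task_done()

    if __name__ == "__main__":
        request_queue.put({"audio_path": "recordings/student_001.wav"})
        worker_loop()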

Challenges that the team had been facing

The team faced challenges with this setup while trying to conduct the first pilot, which motivated them to try out other solutions:

Scaling was limited

The pilot was anticipated to run at a huge scale (~6 million students in a month). However, the team was not confident that the managed ML service would be able to support this scale because:

  1. Separate Quota: The managed ML service had a separate quota and allocation for ML instances, which was difficult to increase.
  2. Difficult to get ML Instance Quota: Getting extra quota is a slow process, and the team needed to make a business case to be eligible for more. Even when the team was allocated more quota, it was barely a tenth of what the team expected.
  3. Getting non-ML Instances is much easier: The team found getting quota for non-ML instances much easier. However, it was difficult for the team to use them in the pilot without the managed MLOps tools.

Support was slow

During the pilot, the team faced issues with scaling speed, and some pods did not come up as expected. To resolve the issue, the team had to contact the cloud provider's representatives, who then contacted the technical team. This back-and-forth delayed resolution and, in turn, the pilot.

Scaling was slow

When request traffic increased during the pilot, the pods had to scale horizontally (spin up new nodes that could pick up and process some of the requests from the queue). This process took ~9-10 minutes for each new pod that was spun up, resulting in delayed responses and a poor experience for the end user.

Unsustainably high costs

GPU instances are very expensive due to the global shortage of chips, and the cloud provider adds a 20-40% markup on top for ML instances. This made the cost of the instances too high to be feasible for the team at the scale at which they wanted to run the project.

The system was ready for deployment with TrueFoundry in less than a week

When we met the Vachan Samiksha team, they were in the period between their first pilot and the second. The pilot was less than a week away and we had to:

  1. Set up the TrueFoundry platform on their cloud Infrastructure (Since the data is very sensitive and no data was allowed to go beyond the project’s VPC)
  2. Onboard the team and walk them through the different functionalities of the platform.
  3. Migrate the Vachan Samiksha application to the platform.
  4. Load test the application and benchmark its horizontal scaling.

Pilot was ready to be shipped with TrueFoundry in <1 Week

In the week before the pilot:

Platform Installation

Our team helped the Wadhwani AI team install the platform on their own raw Kubernetes cluster. The control plane and the workload cluster were both installed on their own infrastructure. All of the data, the UI elements to interact with the platform, and the workload processes for training/deploying the models remained within their own VPC. The platform also complied with all of the organization's security rules and practices.

Training and Onboarding

During training and onboarding, we helped the team understand how the platform's different components interact. We walked them through how to set up resources, configure autoscaling, and deploy the model.

Migration

The Wadhwani AI team was able to migrate the application on its own with minimal help from the TrueFoundry team. This was done in a 1-hour call with the team.

Testing

After the application was deployed, the team started testing production-level load on it. The team independently scaled the application up to more than 100 nodes through a simple setting in the TrueFoundry UI, 5X their previous highest achievable scale. They also benchmarked the speed of node scaling, which was much (3-4X) faster than their previous managed ML service.

Shipping

With the load tests done, the team deployed the pilot application and was prepared for the second phase of the pilot, which rolled out to 1,000 schools, 9,000 teachers, and over 2 lakh students.

More control at a much lower cost with TrueFoundry

Application Architecture with TrueFoundry

With less than 10 hours of effort, the Wadhwani AI team realized significant improvements in speed, control, and cost. Some of the major changes were:

More Control, Visibility, and Developer Independence

The data scientists and machine learning engineers were able to configure several elements on their own that had previously been difficult to do through the cloud provider's console or had required relying on the engineering team:

Configuring GPU node Auto-scaling policy

The policy scaled based on queue length, and the maximum number of replicas/nodes was raised to 70 from the previous limit of 20.
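
Concretely, queue-length-based autoscaling usually reduces to a target backlog per replica: desired replicas = ceil(queue length / target backlog per replica), clamped to a maximum. The short Python sketch below is an illustrative version of that rule, not TrueFoundry's actual configuration interface; the maximum of 70 replicas comes from the text above, while the target backlog of 10 requests per replica is an assumed value.

    import math

    MAX_REPLICAS = 70                 # raised from the previous limit of 20
    MIN_REPLICAS = 1
    TARGET_BACKLOG_PER_REPLICA = 10   # assumed target; not from the case study

    def desired_replicas(queue_length: int) -> int:
        """Queue-length-based rule: one replica per TARGET_BACKLOG_PER_REPLICA
        pending requests, clamped between MIN_REPLICAS and MAX_REPLICAS."""
        wanted = math.ceil(queue_length / TARGET_BACKLOG_PER_REPLICA)
        return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))

    if __name__ == "__main__":
        for backlog in (0, 45, 800, 5000):
            print(backlog, "->", desired_replicas(backlog))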

Setting up time-based auto-scaling

Since most of the pilot traffic came in during school hours when the teachers interacted with the students, there were minimal requests, if any, during the evenings and nights. The team was able to set up a scaling schedule with which the pods scaled down to a minimum during those off hours. This saved about 15-20% of the pilot cost.
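
A schedule like this amounts to a pair of time windows that raise or lower the replica floor. The sketch below expresses that idea in plain Python purely as an illustration; the school-hours window and replica counts are assumptions, not the team's actual configuration.

    from datetime import datetime, time

    # Assumed schedule: keep capacity up during school hours, scale to a
    # minimum in the evenings and at night.
    SCHOOL_HOURS_START = time(8, 0)
    SCHOOL_HOURS_END = time(17, 0)
    DAYTIME_MIN_REPLICAS = 10
    OFF_HOURS_MIN_REPLICAS = 1

    def min_replicas_for(now: datetime) -> int:
        """Return the replica floor for the given time of day."""
        if SCHOOL_HOURS_START <= now.time() <= SCHOOL_HOURS_END:
            return DAYTIME_MIN_REPLICAS
        return OFF_HOURS_MIN_REPLICAS

    if __name__ == "__main__":
        print(min_replicas_for(datetime(2024, 1, 15, 10, 30)))  # school hours -> 10
        print(min_replicas_for(datetime(2024, 1, 15, 22, 0)))   # night -> 1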

Utilization metrics and suggestions

The team could easily monitor traffic, resource utilization, and responses directly from the TrueFoundry UI. They also received suggestions through the platform whenever resources were over- or under-provisioned.

"For me the biggest differentiator working with TrueFoundry was the ease of usage and the quick response and support provided by the team. I was able to setup and migrate our entire code base in less than 1 day which was amazing. During the pilot and whenever we had any doubts or request the TrueFoundry team was available immediately to solve our doubts and support us. Besides these factors we are getting a massive cost reduction which is super helpful for the project."

- Jatin Agrawal, Machine Learning Scientist @ Wadhwani AI

TrueFoundry helped the team scale while decreasing costs

5X faster scaling

To test scaling with TrueFoundry, the team sent a burst of 88 requests to the application and benchmarked the performance of the cloud provider's managed ML service against TrueFoundry. All system configurations were kept the same: the scaling logic (based on backlog queue length), the initial number of nodes, the instance type, etc.

We found that TrueFoundry could scale up 78% faster than the managed ML service, which gave users much faster responses. The end-to-end time taken to respond to a query was 40% lower with TrueFoundry.

Autoscaling Test Results (A10g, 4 vCPUs, 2 workers, 88 requests)

  Metric                                    | Managed ML Service | TrueFoundry
  Total time to process all 88 requests     | 660 s              | 395.9 s
  Time to scale up (1 worker to 2 workers)  | 9 min              | 2 min
  Time before the autoscaler was triggered  | 2 min 30 s         | 15 s
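
The figures quoted above follow directly from this table: 9 minutes down to 2 minutes is a ~78% reduction in scale-up time, and 660 s down to 395.9 s is a ~40% reduction in end-to-end time. A quick check:

    # Quick check of the percentages quoted above, using the benchmark table.
    scale_up_before, scale_up_after = 9 * 60, 2 * 60  # seconds
    total_before, total_after = 660, 395.9            # seconds

    print(f"Scale-up time reduction: {1 - scale_up_after / scale_up_before:.0%}")  # ~78%
    print(f"End-to-end time reduction: {1 - total_after / total_before:.0%}")      # ~40%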

50% lower cost

The cost that the team was incurring for the pilot was reduced by ~50% by moving to TrueFoundry. This was enabled by the following contributing factors (a rough compounding check follows the list):

  1. ~25-30% Reduction - Use of bare Kubernetes: Managed ML instances carry a 25-40% markup over the same instance provisioned directly on bare Kubernetes. Since TrueFoundry runs on Kubernetes directly, the team saved a lot of cost here.
  2. ~15-20% Reduction - Time-based autoscaling: The team scheduled the downscaling of pods when they expected lower traffic to the application. This saved the team 15-20% of the cloud costs.
  3. ~20-30% Reduction - Use of Spot instances: Spot instances are unutilized capacity that cloud providers offer at 50-60% discounts. By enabling a simple flag in the UI, the team could use a mix of spot and on-demand instances. Spot instances risk being de-provisioned, but TrueFoundry's reliability layer manages the mix of spot and on-demand instances to keep availability at a reliable level.
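
Because each of these savings applies to the cost left after the previous one, they compound multiplicatively rather than adding up. Taking illustrative midpoints of the ranges above (assumed values, not figures reported by the team) lands roughly at the ~50-55% overall reduction described in this case study:

    # Illustrative midpoints of the ranges listed above; the savings compound,
    # since each one applies to the cost remaining after the previous one.
    no_ml_markup = 1 - 0.275        # ~25-30% from dropping the managed-ML markup
    time_based_scaling = 1 - 0.175  # ~15-20% from scheduled off-hours downscaling
    spot_instances = 1 - 0.25       # ~20-30% from mixing in spot instances

    remaining = no_ml_markup * time_based_scaling * spot_instances
    print(f"Remaining cost: {remaining:.0%} of the original (~{1 - remaining:.0%} saved)")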

High GPU Availability with Lower Costs

While the managed ML service was limited by the availability of GPU instances in one region of the cloud provider, TrueFoundry can add worker nodes to the system from any region or cloud provider.
This means that:

  1. High GPU Availability from multiple cloud providers/regions: Users can spin up nodes in a different region of the cloud that has higher GPU availability, or with other cloud providers such as AWS, E2E Networks, RunPod, Azure, GCP, or others. This is critical since almost every company is facing GPU quota limitations, and having this kind of backup is necessary to ensure the system's reliability.
  2. Cost Reduction: Different cloud providers price GPU instances differently; prices can vary by 40-80% between one provider and another. TrueFoundry lets the user connect any GPU provider to a single control plane and scale seamlessly across these vendors, with the option to choose a lower-cost vendor, if it has availability, to save on costs.

Use the best tools without any limitations

TrueFoundry provides seamless integration with any tool that the team wants to use. With the cloud provider, this was limited by the provider's design choices and native integrations. For example, the team wanted to use NATS to publish messages, which the cloud provider's native service did not offer at the time. TrueFoundry made these kinds of choices trivial for the Wadhwani AI team.
