From Research to Production Using the FARO Architecture

FARO (expanded as FastAPI Back end - React Front end - ONNX Model) follows a microservices architecture in which the application is developed as multiple components. Each component works independently, and the application works when the components are stacked together. This reduces dependencies between development teams working on rapidly evolving business needs. The FARO pipeline helps ML teams deploy a product created in the experimentation phase into production with relative ease, and the end product is highly scalable to business needs.


  • Develop a RESTful API in Python using FastAPI and export ML/DL models in ONNX format.
  • Create and render React components in the browser.
  • Connect a React application to a FastAPI back end.
  • Move the project into a Docker container so that it can be deployed in the cloud or on premises.
  • Showcase a petrophysical application that uses FARO.


FARO Flow chart


FastAPI:

FastAPI is a modern, high-performance web framework for building APIs with Python 3.6+ based on standard Python type hints. It provides automatic interactive documentation and endpoints for testing the API. Its use of the Uvicorn server (an Asynchronous Server Gateway Interface implementation) makes FastAPI a fast, efficient, and effective framework to deploy in production. Any database can be connected to FastAPI for information retrieval.




React:

React is a component-based JavaScript library for building interactive user interfaces. New features can be created without rewriting existing code, and React can handle rich data. The component-based architecture re-renders only the components affected by a change in data. React can be used for both web and mobile applications.

ONNX Model:

ONNX, expanded as Open Neural Network Exchange, defines an open standard format for machine learning and deep learning models trained with any of a wide range of tools, frameworks, runtimes, and compilers. ONNX promotes interoperability and reduces hardware dependencies, which helps move projects from research to production seamlessly.



Docker:

Docker packages the software and the configuration files required for the project into a container. Docker images can be deployed anywhere, be it cloud or on premises.



Design Goals:

  • A micro-service-based architecture that provides an API for the front end to consume. This ensures the front end remains independent of the back-end architecture, allowing quick prototyping and production releases.
  • The architecture should be reusable and should accommodate the complexities of different projects.
  • An open-source tech stack, to keep the benefit-to-cost ratio high.
  • A scalable architecture that grows with business needs.
Use Case - PETAI Application

The PETAI app predicts bedding structure from FMI well logs. First, it processes the image through morphological features, and then it runs an unsupervised deep learning algorithm to perform image segmentation.