How to deploy a machine learning model to an existing website

I'm familiar with implementing algorithms but new to machine learning, and I have a gap between academic work and production.

I'm implementing a recommender system. Training the model finished with good results, and then I stopped and asked: what do I do next? How do I deploy it with an existing website?

During learning I used a CSV dataset and a local machine, but online there will be a database with hundreds of thousands of users and thousands of users, so I think it is impossible to load all the data and recommend things to a user.

The question is: how will I use my trained models in production?


When you said "database with hundreds of thousands of users and thousands of users," I guess you meant "hundreds of thousands of users and thousands of items." Do you have user-based collaborative filtering or item-item filtering?

If so, I guess those numbers (100K users × 1K items) won't be a problem for any decent relational database. Basically you create a table, say "Ratings", where you store UserId, ItemId, and Rating (you can omit the Rating field if your "features" are binary, e.g. item purchased or not).

If your user-item matrix is sparse, this table will be small. You also create a "Users" table where, after any insertion into the "Ratings" table, you can pre-compute, for example, the user's average rating (if you need it to normalize predictions) and any other data you may need. As a rule of thumb, don't do very complex calculations that involve scanning other tables at insert time, but do do simple maths at insert time if that helps you avoid complex scans of other tables when data is retrieved to calculate the predictions/recommendations.
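The bookkeeping described above can be sketched with SQLite; the table and column names and the running-sum scheme are illustrative assumptions, not a standard:

```python
import sqlite3

# Sketch of the "Ratings" + "Users" tables described above, using
# SQLite in memory for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Ratings (
        UserId  INTEGER NOT NULL,
        ItemId  INTEGER NOT NULL,
        Rating  REAL    NOT NULL,
        PRIMARY KEY (UserId, ItemId)
    );
    CREATE TABLE Users (
        UserId     INTEGER PRIMARY KEY,
        RatingSum  REAL    NOT NULL,
        RatingCnt  INTEGER NOT NULL
    );
""")

def add_rating(user_id, item_id, rating):
    """Insert a rating and keep per-user aggregates up to date,
    so the user's average is cheap to read at prediction time."""
    con.execute("INSERT INTO Ratings VALUES (?, ?, ?)",
                (user_id, item_id, rating))
    con.execute("""
        INSERT INTO Users (UserId, RatingSum, RatingCnt) VALUES (?, ?, 1)
        ON CONFLICT(UserId) DO UPDATE SET
            RatingSum = RatingSum + excluded.RatingSum,
            RatingCnt = RatingCnt + 1
    """, (user_id, rating))

add_rating(1, 10, 4.0)
add_rating(1, 11, 2.0)

# Reading the average is now a single-row lookup, no table scan.
avg = con.execute(
    "SELECT RatingSum / RatingCnt FROM Users WHERE UserId = ?", (1,)
).fetchone()[0]
print(avg)  # 3.0
```

Keeping a running sum and count (instead of recomputing the average from Ratings) is exactly the "simple maths at insert time" trade-off mentioned above.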

You could get some ideas from here: http://lemire.me/fr/documents/publications/webpaper.pdf

Keep in mind that the relational DB is storage; even though it could calculate almost everything in SQL, the usual scenario is to use the relational DB for filtering and joining, and then do the maths in another layer/tier.
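A minimal sketch of that split, assuming the same Ratings table as above: SQL does the filtering and joining, and the application layer does the math (here, the average rating deviation between two users over their co-rated items):

```python
import sqlite3

# Toy data; table layout is the illustrative "Ratings" schema from above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Ratings (UserId INT, ItemId INT, Rating REAL)")
con.executemany("INSERT INTO Ratings VALUES (?, ?, ?)", [
    (1, 10, 4.0), (1, 11, 2.0), (1, 12, 5.0),
    (2, 10, 5.0), (2, 11, 1.0),
])

# DB layer: join the two users' ratings on the items they share.
rows = con.execute("""
    SELECT a.Rating, b.Rating
    FROM Ratings a JOIN Ratings b
      ON a.ItemId = b.ItemId AND a.UserId = 1 AND b.UserId = 2
""").fetchall()

# Application layer: the actual math on the small filtered result.
diff = sum(ra - rb for ra, rb in rows) / len(rows)
print(len(rows), diff)
```

The join returns only the co-rated items (two rows here), so the Python side never touches the full matrix.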

For example, to deploy as a web service on Azure Kubernetes Service, use AksWebservice.deploy_configuration() to create the deployment configuration; to test first on your own machine, LocalWebservice.deploy_configuration() deploys the model as a web service on your local computer.
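A configuration sketch, assuming the v1 azureml-core SDK; the names `ws`, `model`, and `inference_config` are placeholders for objects you would already have from your workspace setup:

```python
# Requires the azureml-core package and an Azure ML workspace;
# this is a sketch of the deployment configuration step, not a
# runnable standalone script.
from azureml.core.model import Model
from azureml.core.webservice import AksWebservice, LocalWebservice

# Production: web service on Azure Kubernetes Service.
aks_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Local testing: the same model served as a container on your machine.
local_config = LocalWebservice.deploy_configuration(port=8890)

# Swap local_config for aks_config when you are ready for production.
service = Model.deploy(ws, "recommender-svc", [model],
                       inference_config, local_config)
service.wait_for_deployment(show_output=True)
```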


You can deploy the trained model to production yourself by creating an API and running it on at least a couple of instances behind a load balancer.

The other route is to use a service that handles that for you. My service, mlrequest, makes it very simple to create and deploy high-availability, low-latency models.

You can use a reinforcement learning model, which is well suited to making recommendations. Below is an example of a movie recommender.

from mlrequest import RL
rl = RL('your-api-key')
features = {'favorite-genre': 'action', 'last-movie-watched': 'die-hard'}
r = rl.predict(features=features, model_name='movie-recommender', session_id='the-user-session-id', negative_reward=0, action_count=3)
best_movie = r.predict_result[0]

When the model does a good job (e.g., the user clicked on or started watching the predicted movie) then you should reward the model by giving it a positive value (in this case, a 1).

r = rl.reward(model_name='movie-recommender', session_id='the-user-session-id', reward=1)

Every time the model is rewarded for a correct action it took, it will learn to do those actions in that context more often. The model therefore learns in real-time from its own environment and adjusts its behavior to give better recommendations. No manual intervention needed.

Flask is a simple web application framework that we can use to build the backend of web apps and get started quickly. In this tutorial, I will explain enough to create a web app around your machine learning model. Unfortunately, I will not be explaining every line of Flask code and structure; you can learn more about Flask here.
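A minimal sketch of what "the web app around your model" looks like; `DummyModel` and the `/predict` route are illustrative assumptions, standing in for your real unpickled recommender:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

class DummyModel:
    """Stand-in for your real trained/unpickled model."""
    def predict(self, features):
        return ["item-42"]

model = DummyModel()  # in practice: loaded once at startup

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()  # e.g. {"user_id": 7}
    return jsonify({"recommendations": model.predict(features)})

# Start the dev server with `flask run` (or app.run()); not called here.
```

The website's frontend then just POSTs the user's features to `/predict` and renders the returned list.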


I don't really get what you mean here. Normally, you train your model with a certain set of training data, then validate it using a certain set of test data. If the model shows promising results (i.e. is "trained well"), it goes to production.

Take shopping recommendations as an example. You've trained a model to suggest the next product based on previous products. Now, in production, if your customer buys or looks at a new product, you feed your algorithm this product and it gives you a set of other products they might choose. This is just the same as with your training/validation data.
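A toy sketch of that flow, using a simple co-occurrence counter as a stand-in for whatever model you actually trained: "training" runs over past baskets offline, and the production step is the same lookup applied to the product the customer just interacted with:

```python
from collections import Counter
from itertools import combinations

# Offline "training" data: past shopping baskets.
baskets = [
    ["laptop", "mouse", "usb-hub"],
    ["laptop", "mouse"],
    ["phone", "case"],
]

# Training: count how often each pair of products appears together.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(set(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(product, k=2):
    """Production step: same computation, applied to the live
    product the customer just bought or viewed."""
    scores = {b: c for (a, b), c in co_counts.items() if a == product}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("laptop"))  # ['mouse', 'usb-hub']
```

The point is that production input is handled exactly like validation input; only the source of the product changes.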

The source of the data is generally not relevant. But maybe I really don't get your question. Could you elaborate?

Surprisingly, machine learning deployment is rarely discussed online. Bootcamps and grad programs don't teach students how to deploy models. If you do a Google search, you'll find a lot of blog posts about standing up Flask APIs on your local machine, but none of these posts go into much detail beyond writing a simple endpoint.


Here is an example of how to integrate a machine learning model into a web app: https://www.youtube.com/watch?v=mu-R0dQ3-Qo, and the code can be found here: https://github.com/shivasj/Integrating-a-Machine-Learning-Model-into-a-Web-app

Deploying your machine learning model is a key aspect of every ML project. Learning how to use Flask to deploy a model into production is worth the effort, and model deployment is a core topic in data scientist interviews, so it pays to start learning it now.


Preprocessing the data and creating the model. After we have prepared the data and saved all the necessary files, it is time to start creating the API to serve our model. NOTE: there are several methods for saving a model, each with its own pros and cons. Aside from pickling, you can also use Joblib or LightGBM's internal model-saving functionality.
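A sketch of the save/load step with pickle; `TinyModel` is a stand-in for a real trained model, and in practice you would write the bytes to a `model.pkl` file at training time and load it once when the API process starts:

```python
import pickle

class TinyModel:
    """Stand-in for a real trained model (e.g. a scikit-learn
    estimator); anything picklable works the same way."""
    def predict(self, xs):
        return [x * 2 for x in xs]

model = TinyModel()

# Training side: serialize the fitted model.
# (In a real project: pickle.dump(model, open("model.pkl", "wb")))
blob = pickle.dumps(model)

# API side: deserialize once at startup, then reuse per request.
restored = pickle.loads(blob)
print(restored.predict([1, 2]))  # [2, 4]
```

Joblib follows the same dump/load pattern and is usually preferred when the model holds large NumPy arrays.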


It is possible to deploy an already trained model in Azure Machine Learning using only the Azure Machine Learning portal GUI, without a single line of additional code. However, deploying a web endpoint in a single container (which is the quickest way to deploy a model) is only possible via code or the command line.


Training a machine learning model on a local system. We need a machine learning model that we can wrap in a web service. For demo purposes, I chose a logistic regression model doing multiclass classification on the iris dataset (yep, super easy! #LazinessFTW). The model was trained on a local system using Python 3.6.
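The demo model described above can be sketched in a few lines, assuming scikit-learn is installed; the exact split and hyperparameters here are illustrative choices, not the author's:

```python
# Logistic regression on iris, the local-training step before wrapping
# the model in a web service.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

The fitted `clf` object is what you would then serialize (pickle/Joblib) and load inside the web service.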