Connect to Memorystore from Cloud Run

I want to run a service on Google Cloud Run that uses Cloud Memorystore as a cache.

I created a Memorystore instance in the same region as my Cloud Run service and used the example code to connect: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/memorystore/redis/main.go. This didn't work.

Next I created a Serverless VPC Access connector, which didn't help either. I use fully managed Cloud Run (not Cloud Run on GKE), so I can't change any cluster configuration.

Is there a way to connect from Cloud Run to Memorystore?


If you need something reachable inside your VPC, you can also spin up Redis on a Compute Engine instance.

It's more costly (especially for a cluster) than Redis Cloud, but it is a workable temporary solution if you have to keep the data in your VPC.
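A sketch of what that could look like, assuming a Debian-based image (the instance name, machine type, and zone are placeholders):

```shell
# Hypothetical example: a GCE VM that installs Redis on boot.
# The name, machine type, and zone are illustrative placeholders.
gcloud compute instances create redis-cache \
  --machine-type=e2-small \
  --zone=us-central1-a \
  --metadata=startup-script='#!/bin/bash
apt-get update && apt-get install -y redis-server
# Bind to all interfaces so other VMs in the VPC can reach it
sed -i "s/^bind .*/bind 0.0.0.0/" /etc/redis/redis.conf
systemctl restart redis-server'
```

Clients in the same VPC would then connect to the VM's internal IP on port 6379; you'd still want a firewall rule restricting access to that port.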

While waiting for Serverless VPC Access connector support on Cloud Run (Google said yesterday that announcements would be made in the near term), you can connect to Memorystore from Cloud Run using an SSH tunnel via a GCE instance.

The basic approach is as follows.

First, create a forwarder instance on GCE:

gcloud compute instances create vpc-forwarder --machine-type=f1-micro --zone=us-central1-a

Don't forget to open port 22 in your firewall policies (it's open by default).
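If SSH is blocked in your project, a rule along these lines opens it (the rule name and source range here are placeholders; in practice you'd want to restrict the source range):

```shell
# Hypothetical example: allow inbound SSH to the forwarder instance.
gcloud compute firewall-rules create allow-ssh \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22 \
  --source-ranges=0.0.0.0/0
```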

Then, install the gcloud CLI via your Dockerfile.

Here is an example for a Rails app. The Dockerfile uses a script as its entrypoint.

# Use the official lightweight Ruby image.
# https://hub.docker.com/_/ruby
FROM ruby:2.5.5

# Install gcloud
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
  && tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
  && /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin

# Generate SSH key to be used by the SSH tunnel (see entrypoint.sh)
RUN mkdir -p /home/.ssh && ssh-keygen -b 2048 -t rsa -f /home/.ssh/google_compute_engine -q -N ""

# Install bundler
RUN gem update --system
RUN gem install bundler

# Install production dependencies.
WORKDIR /usr/src/app
COPY Gemfile Gemfile.lock ./
ENV BUNDLE_FROZEN=true
RUN bundle install

# Copy local code to the container image.
COPY . ./

# Run the web service on container startup.
CMD ["bash", "entrypoint.sh"]

Finally, open an SSH tunnel to Redis in your entrypoint.sh script:

#!/bin/bash

# Memorystore config
MEMORYSTORE_IP=10.0.0.5
MEMORYSTORE_REMOTE_PORT=6379
MEMORYSTORE_LOCAL_PORT=6379

# Forwarder config
FORWARDER_ID=vpc-forwarder
FORWARDER_ZONE=us-central1-a

# Start tunnel to Redis Memorystore in background
gcloud compute ssh \
  --zone=${FORWARDER_ZONE} \
  --ssh-flag="-N -L ${MEMORYSTORE_LOCAL_PORT}:${MEMORYSTORE_IP}:${MEMORYSTORE_REMOTE_PORT}" \
  ${FORWARDER_ID} &

# Run migrations and start Puma
bundle exec rake db:migrate && bundle exec puma -p 8080
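Since the tunnel above is opened in the background, the app can start before the tunnel is ready. A small wait loop guards against that; this is a sketch using bash's built-in /dev/tcp redirection (the function name is my own):

```shell
#!/bin/bash
# wait_for_port HOST PORT RETRIES -> returns 0 once the port accepts a
# TCP connection, 1 if it never does. Uses bash's /dev/tcp feature.
wait_for_port() {
  local host=$1 port=$2 retries=$3
  local i
  for ((i = 0; i < retries; i++)); do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      exec 3>&-   # close the probe connection
      return 0
    fi
    sleep 1
  done
  return 1
}

# In entrypoint.sh, after starting the tunnel in the background:
# wait_for_port 127.0.0.1 6379 10 || { echo "tunnel not ready" >&2; exit 1; }
```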

With the solution above, Memorystore will be available to your application on localhost:6379.
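On the application side, a Rails app could then point its Redis client at the tunnel endpoint. A minimal sketch (the environment variable names here are illustrative, not part of the setup above):

```ruby
# Build the Redis URL for the tunneled Memorystore endpoint.
# Defaults match the tunnel above (localhost:6379); override via env.
redis_host = ENV.fetch("REDIS_HOST", "127.0.0.1")
redis_port = ENV.fetch("REDIS_PORT", "6379")
REDIS_URL = "redis://#{redis_host}:#{redis_port}/0"
puts REDIS_URL

# A client would then be created with e.g.
#   Redis.new(url: REDIS_URL)   # requires the redis gem
```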

There are a few caveats, though:

  1. This approach requires the service account configured on your Cloud Run service to have the roles/compute.instanceAdmin role, which is quite powerful.
  2. The SSH keys are baked into the image to speed up container boot time. That's not ideal.
  3. There is no failover if your forwarder crashes.

I've written up a longer and more elaborate approach in a blog post that improves the overall security and adds failover capabilities. That solution uses plain SSH instead of the gcloud CLI.
