Golang detect in-flight requests

I was wondering if there is already a library for this, or failing that, a suggestion on which way to go for the following problem:

Client A makes a request for resource A. This is a long-running request, since resource A is expensive, and it results in a cache miss. In the meantime, client B makes a request for resource A; it's still a cache miss, since client A's request hasn't returned and populated the cache yet. So instead of making a new request to generate resource A, client B should block and be notified when client A's request is complete and has populated the cache.

I think the groupcache library has something along those lines, but I haven't been able to browse through the code to figure out how they do it. I also don't want to tie the implementation to it and use it as a dependency.

The only solution I have so far is a pub-sub type of thing, where we keep a global map of the current in-flight requests, keyed by request ID. When req1 comes in, it sets its ID in the map; when req2 comes in, it checks whether its ID is in the map. Since it's requesting the same resource, it is, so req2 blocks on a notifier channel. When req1 finishes, it does three things:

  1. evicts its ID from the map
  2. saves the entry in the cache
  3. sends a broadcast with its ID to the notifier channel

req2 then receives the notification, unblocks, and fetches from the cache.

Since Go doesn't have built-in support for broadcasts, there would probably be one goroutine listening on the broadcast channel and keeping a list of subscribers to broadcast to for each request, or maybe we change the map to reqID => list(broadcastChannelSubscribers). Something along those lines.

If you think there is a better way to do it with Go's primitives, any input would be appreciated. The only piece of this solution that bothers me is the global map surrounded by locks; I assume it will quickly become a bottleneck. If you have lock-free ideas, even probabilistic ones, I'm happy to hear them.

It reminds me of one question where someone was implementing a similar thing:

Coalescing items in channel

I gave an answer with an example of implementing such a middle layer. I think this is in line with your ideas: have a routine keeping track of requests for the same resource and prevent them from being recalculated in parallel.

If you have a separate routine responsible for taking requests and managing access to the cache, you don't need an explicit lock (there is one buried in a channel, though). Anyhow, I don't know the specifics of your application, but considering you need to check the cache (probably locked) and (occasionally) perform an expensive calculation of a missing entry, a lock on map lookups doesn't seem like a massive problem to me. You can also always spawn more such middle-layer routines if you think this would help, but you would need a deterministic way of routing the requests (so each cache entry is managed by a single routine).

Sorry for not bringing you a silver-bullet solution, but it sounds like you're well on your way to solving your problem anyway.


Caching and performance problems are always tricky, and you should always build a basic solution to benchmark against, to make sure your assumptions are correct. But if we know that the bottleneck is fetching the resource, and that caching will give significant returns, you can use Go's channels to implement queuing. Assume response is the type of your resource.

package main

import "time"

type response struct{ body string }

// makeLongRunningRequest stands in for the expensive fetch.
func makeLongRunningRequest() *response {
    time.Sleep(1 * time.Second)
    return &response{body: "resource A"}
}

type request struct {
    back chan *response
}

func main() {
    c := make(chan request, 10) // buffered, so senders don't block
    go func(input chan request) {
        var cached *response
        for i := range input { // receive requests one at a time
            if cached == nil { // only make the expensive request once
                cached = makeLongRunningRequest()
            }
            i.back <- cached
        }
    }(c)

    resp := make(chan *response)

    c <- request{resp} // cache miss
    c <- request{resp} // will get queued
    c <- request{resp} // will get queued

    for n := 0; n < 3; n++ {
        r := <-resp
        _ = r // do something with the response
    }
}
Here we're only fetching one resource, but you could start one goroutine for each resource you want to fetch. Goroutines are cheap, so unless you need millions of resources cached at the same time, you should be OK. You could of course also kill your goroutines after a while.

To keep track of which resource id belongs to which channel, I'd use a map

map[resourceId]chan request

with a mutex. Again, if fetching the resource is the bottleneck, then the cost of locking the map should be negligible. If locking the map turns out to be a problem, consider using a sharded map.

In general you seem to be well on your way. I'd advise you to keep your design as simple as possible and to use channels instead of locks where you can; they protect you from some terrible concurrency bugs.


One solution is a concurrent non-blocking cache as discussed in detail in The Go Programming Language, chapter 9.

The code samples are well worth a look because the authors take you through several versions (memo1, memo2, etc), illustrating problems of race conditions, using mutexes to protect maps, and a version using just channels.

Also consider https://blog.golang.org/context, as it covers similar concepts and deals with cancellation of in-flight requests.

It's impractical to copy the content into this answer, so hopefully the links are of use.


This is already provided for Go by the golang.org/x/sync/singleflight package.

For your use case, just add some extra logic on top of singleflight. Consider the code snippet below:

// The group must be shared across requests, so declare it at package level.
var requestGroup singleflight.Group

func main() {
    http.HandleFunc("/github", func(w http.ResponseWriter, r *http.Request) {
        var key = "facebook"
        // Search the cache; if found, return from the cache.
        if res, err := searchCache(key); err == nil {
            fmt.Fprintf(w, "Company Status: %q", res)
            return
        }
        // Cache miss -> make a single-flight request, and cache the result.
        v, err, shared := requestGroup.Do(key, func() (interface{}, error) {
            // getCompanyStatus() returns (string, error), which satisfies
            // (interface{}, error), so we can return its result directly.
            return getCompanyStatus()
        })
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        status := v.(string)
        // Set the cache here.
        setCache(key, status)

        log.Printf("/github handler request: status %q, shared result %t", status, shared)

        fmt.Fprintf(w, "Company Status: %q", status)
    })

    http.ListenAndServe("", nil)
}

// getCompanyStatus retrieves the company's API status.
func getCompanyStatus() (string, error) {
    log.Println("Making request to some API")
    defer log.Println("Request to some API complete")

    time.Sleep(1 * time.Second)

    resp, err := http.Get("Get URL")
    if err != nil {
        return "", err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return "", fmt.Errorf("upstream response: %s", resp.Status)
    }

    r := struct{ Status string }{}
    err = json.NewDecoder(resp.Body).Decode(&r)
    return r.Status, err
}

I hope the code snippet is self-explanatory; you can refer to the official singleflight docs to delve deeper into it.


  • seems pretty much the same; in my case, since the request is quite heavy, the locks probably won't be the bottleneck. I was just wondering if there is a good lock-free way of doing it
  • I think that's pretty much what I'm suggesting. However, I think your approach with map[resourceId]chan request won't work, since you need multiple channels per resourceId: you can have multiple future requests pending on the same resourceId, and you can't broadcast to multiple subscribers via one channel, so you need a channel per subsequent request, more like map[res_id] -> slice[channels_to_notify]
  • @Feras yes, I think it will work (but I haven't tested it). The broadcasting pattern is uncommon in Go, and this is not broadcasting. The goroutine is only aware of one request at a time. For each request from the input channel it puts one *response on the output channel. Having one input channel is a feature, since it acts as a thread-safe queue when you send multiple inputs at once. Making it unbuffered would block execution, as you wanted. In a real application you'd probably want distinct output channels, which you can get like this: c <- request{chan1}, c <- request{chan2}, etc.
  • And as a bonus, when you close the input channel the goroutine will exit cleanly and the cached *response will be garbage collected. Useful if you only want to keep stuff in the cache for a limited time. So most of the complexity is kept in the goroutine which means less risk of having multithreaded stuff pollute your program.
  • You are actually right; I forgot that channels have this type of behaviour and that close(chan) can be used for synchronization. I.e., all requests with the same reqID wait on the channel in the map for that reqID; as soon as it's closed, they unblock and continue their execution, fetching from the cache. Much simpler, and no need to broadcast...
  • Thank you very, very much for pointing out this hidden gem in the Golang x libraries!