Django server not sending logs to Logstash


I am using ELK stack for centralised logging from my Django server. My ELK stack is on a remote server and logstash.conf looks like this:

input {
  tcp {
    port => 5959
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["xx.xx.xx.xx:9200"]
  }
}

Both the elasticsearch and logstash services are running (checked using docker-compose logs logstash).
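One quick sanity check, independent of Django: open a raw TCP connection to the Logstash input and push a single JSON line by hand. This is a minimal sketch; the host and port are placeholders to be replaced with the remote ELK address. If it raises ConnectionRefusedError or times out, the problem is networking or firewall rules rather than the Django logging configuration.

```python
import json
import socket

def send_test_event(host, port):
    """Push one newline-terminated JSON event to a Logstash TCP input.

    host/port are placeholders -- substitute the remote ELK address.
    A ConnectionRefusedError or timeout here points at networking or
    firewall rules rather than the Django logging configuration.
    """
    event = {"message": "connectivity test", "type": "django"}
    payload = (json.dumps(event) + "\n").encode("utf-8")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)
    return event
```

Run it against the same xx.xx.xx.xx:5959 pair used in LOGGING; if the connection succeeds, the event should then show up in docker-compose logs logstash.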

My Django server's settings file has logging configured as below:

LOGGING = {
    'version': 1,
    'handlers': {
        'logstash': {
            'level': 'INFO',
            'class': 'logstash.TCPLogstashHandler',
            'host': 'xx.xx.xx.xx',
            'port': 5959,                # Default: 5959
            'version': 0,                # Logstash event schema version. Default: 0 (library backward compatibility)
            'message_type': 'django',    # 'type' field in the logstash message. Default: 'logstash'
            'fqdn': True,                # Use fully qualified domain name. Default: False
            'tags': ['django.request'],  # List of tags. Default: None
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['logstash'],
            'level': 'DEBUG',
        },
    },
}

When I run my Django server, the logstash handler appears to pick up the logs (nothing is printed to the console), but nothing arrives at my remote server. I used the python-logstash library to build the configuration above.

I have been through many similar questions and verified that the services are running and the ports are correct, but I still have no clue why the logs are not reaching Logstash.

Looking at the configuration, the logger "django.request" is set to level DEBUG while the handler "logstash" is set to level INFO. My guess is that the handler filters out the DEBUG messages. I'm not sure though.

Set the same level for the logger and the handler to test that it works.

What level to use depends on what you want from your logs. In this case I guess level INFO would suffice.
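That handler-versus-logger filtering can be demonstrated with plain stdlib logging, no Logstash needed. This is a self-contained sketch; ListHandler is a throwaway class invented for the demo:

```python
import logging

class ListHandler(logging.Handler):
    """Throwaway handler that records every message it is asked to emit."""
    def __init__(self, level):
        super().__init__(level)
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("leveldemo")
logger.setLevel(logging.DEBUG)           # logger lets DEBUG and up through
handler = ListHandler(logging.INFO)      # handler drops anything below INFO
logger.addHandler(handler)

logger.debug("filtered out by the handler")
logger.info("delivered")

print(handler.messages)  # → ['delivered']
```

The logger's level decides which records are created at all; the handler's level then filters again before emitting, so a DEBUG record never reaches an INFO-level handler.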

If you haven't already, take a look at the Django logging documentation.

NOTE: From comments it seems not to solve the problem but I hope it is useful anyway.

UPDATE:

I tried the following configuration and it catches 404 and 500 errors in debug.log.

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'logfile': {
            'level': 'WARNING',
            'class': 'logging.FileHandler',
            'filename': os.path.join(PROJECT_DIR, 'debug.log'),
        },
    },
    'loggers': {
        'django.request': {
            'handlers': ['logfile'],
            'level': 'WARNING',
            'propagate': True,
        },
    },
}

With this test configuration the logstash handler should at least receive the message/logrecord. If that brings no luck, I suggest debugging logstash.TCPLogstashHandler and the SocketHandler it inherits from, to make sure they receive the emitted record.
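A sketch of that debugging, using the stdlib logging.handlers.SocketHandler so it runs without python-logstash installed; TracingSocketHandler and the address are invented for the demo. Since TCPLogstashHandler inherits from SocketHandler, the same subclass trick applies to it:

```python
import logging
import logging.handlers

class TracingSocketHandler(logging.handlers.SocketHandler):
    """SocketHandler that notes every record it receives before sending.

    If a message appears in `seen` but never reaches Logstash, the
    logging wiring is fine and the failure is on the network side
    (SocketHandler silently drops records when the peer is unreachable).
    """
    def __init__(self, host, port):
        super().__init__(host, port)
        self.seen = []

    def emit(self, record):
        self.seen.append(record.getMessage())
        super().emit(record)  # drops the record silently if unreachable

logger = logging.getLogger("logstash.trace")
logger.setLevel(logging.DEBUG)
tracer = TracingSocketHandler("127.0.0.1", 5959)  # placeholder address
logger.addHandler(tracer)
logger.warning("probe")
print(tracer.seen)  # messages that reached the handler
```

Seeing "probe" in tracer.seen while nothing arrives at the remote end narrows the fault to connectivity rather than the LOGGING dict.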


Your logging configuration is correct. You need to specify an index name in the elasticsearch output section of your Logstash config. Update it to:

input {
  tcp {
    port => 5959
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["xx.xx.xx.xx:9200"]
    manage_template => false
    index => "djangologs"
  }
}

If you are using Google Cloud or AWS, open the port / update the firewall rules.
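To confirm that events actually land in the new index, you can ask Elasticsearch's _count API. A small sketch; the host is a placeholder and "djangologs" is the index name from the config above:

```python
import json
import urllib.request

def count_url(es_host, index="djangologs"):
    """Build the _count endpoint URL for the index Logstash writes to."""
    return "http://%s/%s/_count" % (es_host, index)

def count_indexed_events(es_host, index="djangologs"):
    """Return how many documents Elasticsearch holds for the index.

    es_host is a placeholder such as 'xx.xx.xx.xx:9200'.
    """
    with urllib.request.urlopen(count_url(es_host, index), timeout=5) as resp:
        return json.load(resp)["count"]
```

If the count stays at zero while the Django server is producing log records, the events are being lost before the elasticsearch output, not inside Elasticsearch.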


I faced the same issue, but after a lot of digging I made it work by changing django.request to django.server. I got the idea when I used just django as the logger name in my Python code and then found the actual logger_name in the log data stored in Elasticsearch. Below is the updated config:

LOGGING = {
    'version': 1,
    'handlers': {
        'logstash': {
            'level': 'INFO',
            'class': 'logstash.TCPLogstashHandler',
            'host': 'xx.xx.xx.xx',
            'port': 5959,                # Default: 5959
            'version': 0,                # Logstash event schema version. Default: 0 (library backward compatibility)
            'message_type': 'django',    # 'type' field in the logstash message. Default: 'logstash'
            'fqdn': True,                # Use fully qualified domain name. Default: False
            'tags': ['django.request'],  # List of tags. Default: None
        },
    },
    'loggers': {
        'django.server': {  # Here is the change
            'handlers': ['logstash'],
            'level': 'DEBUG',
        },
    },
}

Refer to the Django logging documentation for more detail: https://docs.djangoproject.com/en/1.11/topics/logging/#id3
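One way to discover which django.* loggers actually exist at runtime is to inspect the logging module's registry, e.g. from python manage.py shell after serving a request. This is a sketch; active_logger_names is a helper invented for the demo:

```python
import logging

def active_logger_names(prefix="django"):
    """List the logger names registered so far under a given prefix.

    Helper invented for this demo. The output reveals whether your
    setup logs through django.request, django.server, or both (the
    development server has logged via django.server since Django 1.10).
    """
    return sorted(
        name for name in logging.root.manager.loggerDict
        if name == prefix or name.startswith(prefix + ".")
    )
```

Whatever names this returns are the names the 'loggers' section of LOGGING has to match, which is why switching django.request to django.server made the events appear.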


Comments
  • Are you sure that "django.request" doesn't filter out all events?
  • django.request is supposed to send all the request logs
  • Have you tried changing fqdn to False?
  • Yep, tried it, but that did not work either
  • Your point seems to be correct, so what would you suggest for solving this?
  • I suggested using level INFO. Depending on how log messages are created, it is sometimes good to be able to switch DEBUG logging on and off depending on which environment is used. For instance, Django uses it for mail_admins.
  • @ArpitSolanki django.request logs just WARNING and ERROR messages. docs.djangoproject.com/en/1.11/topics/logging/…
  • @DanielBackman Thanks for your effort, but it did not solve the problem. I can still award you the bounty if you add a small note to your answer saying that it did not solve the problem but helped with the logging part of the code
  • @ArpitSolanki I'll try to find why the logger is not working as expected.