Apache Airflow: Executor reports task instance finished (failed) although the task says its queued

Our Airflow installation is using the CeleryExecutor. The concurrency configs were:

# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 16

# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16

# Are DAGs paused by default at creation
dags_are_paused_at_creation = True

# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 64

# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above

# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor

# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16

We have a DAG that executes daily. It has a number of tasks running in parallel, each following the same pattern: sense whether the data exists in HDFS, then sleep 10 minutes, and finally upload to S3.
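For context, a rough sketch of that pattern is shown below. This is an illustration only, not the production DAG: the dag id, task ids, HDFS path, and connection id are hypothetical, and the S3 upload is a placeholder command.

from datetime import datetime
from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.sensors.hdfs_sensor import HdfsSensor

dag = DAG(
    dag_id="daily_hdfs_to_s3",               # hypothetical dag id
    start_date=datetime(2019, 5, 1),
    schedule_interval="@daily",
)

# Wait until the day's data lands in HDFS.
sense_hdfs = HdfsSensor(
    task_id="sense_hdfs_data",
    filepath="/data/landing/{{ ds }}",       # hypothetical path
    hdfs_conn_id="hdfs_default",
    dag=dag,
)

# Sleep 10 minutes before uploading.
sleep_10m = BashOperator(
    task_id="sleep_10_minutes",
    bash_command="sleep 600",
    dag=dag,
)

# Upload the data to S3 (placeholder for the real upload command).
upload_s3 = BashOperator(
    task_id="upload_to_s3",
    bash_command="echo 'aws s3 cp ...'",
    dag=dag,
)

sense_hdfs >> sleep_10m >> upload_s3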

Some of the tasks have been encountering the following error:

2019-05-12 00:00:46,212 ERROR - Executor reports task instance <TaskInstance: example_dag.task1 2019-05-11 04:00:00+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally?
2019-05-12 00:00:46,558 INFO - Marking task as UP_FOR_RETRY
2019-05-12 00:00:46,561 WARNING - section/key [smtp/smtp_user] not found in config

This kind of error occurs randomly across those tasks. When it happens, the state of the task instance is immediately set to up_for_retry, and there are no logs on the worker nodes. After some retries, the tasks eventually execute and finish.

This problem sometimes causes large ETL delays. Does anyone know how to solve it?

We were facing similar problems, which were resolved by using the

"-x, --donot_pickle" option.

For more information: https://airflow.apache.org/cli.html#backfill
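For illustration, a backfill invocation with pickling disabled might look like the following; the dates are placeholders and example_dag stands in for your own DAG id:

airflow backfill -x -s 2019-05-01 -e 2019-05-02 example_dag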

I was seeing very similar symptoms in my DagRuns. I thought it was due to the ExternalTaskSensor and concurrency issues, given the queuing and killed-task language, which looked like this: Executor reports task instance <TaskInstance: dag1.data_table_temp_redshift_load 2019-05-20 08:00:00+00:00 [queued]> finished (failed) although the task says its queued. Was the task killed externally? But when I looked at the worker logs, I saw an error caused by setting a variable with Variable.set in my DAG. The issue is described in "duplicate key value violates unique constraint when adding path variable in airflow dag": the scheduler polls the DagBag at regular intervals to refresh any changes dynamically, so the error recurred with every heartbeat and caused significant ETL delays.

Are you performing any logic in your wh_hdfs_to_s3 DAG (or others) that might be causing errors or delays / these symptoms?
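As a sketch of that failure mode (the variable name, path, and dag id here are hypothetical, not taken from the original DAG): calling Variable.set at module level runs on every scheduler parse of the file, whereas moving it into a task callable runs it only when the task executes.

# Anti-pattern: executed every time the scheduler parses this file,
# which can collide with concurrent inserts of the same Variable row.
# from airflow.models import Variable
# Variable.set("data_path", "/data/landing")

from datetime import datetime
from airflow import DAG
from airflow.models import Variable
from airflow.operators.python_operator import PythonOperator

def set_data_path():
    # Runs only when the task executes, not on every DagBag refresh.
    Variable.set("data_path", "/data/landing")

dag = DAG(
    dag_id="variable_set_example",           # hypothetical dag id
    start_date=datetime(2019, 5, 1),
    schedule_interval="@daily",
)

set_path = PythonOperator(
    task_id="set_data_path",
    python_callable=set_data_path,
    dag=dag,
)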

[#AIRFLOW-5071] Thousands of "Executor reports task instance X finished (success) although the task says its queued" errors. This also appears to trigger thousands of daily emails, because the flag to send email in case of failure is set to True. If the task retry interval is very small (for example 30 seconds), the scheduler might schedule the next retry run while the previous task run's result is still in the executor event queue. The current task run might be in the queued state while the scheduler is processing the executor's previous events, which can make the scheduler fail the current run because of the corresponding check in the jobs.py file.
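A minimal sketch of the mitigation that the ticket implies, assuming the race comes from a very short retry interval, is to raise retry_delay in the DAG's default_args; the values and dag id below are illustrative only.

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    "retries": 3,
    # A retry delay well above the scheduler heartbeat gives the executor
    # time to drain stale events before the next try is queued.
    "retry_delay": timedelta(minutes=5),
}

dag = DAG(
    dag_id="retry_delay_example",            # hypothetical dag id
    default_args=default_args,
    start_date=datetime(2019, 5, 1),
    schedule_interval="@daily",
)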

We have fixed this already. Let me answer my own question:

We have 5 Airflow worker nodes. After installing Flower to monitor the tasks distributed to these nodes, we found out that the failed task was always sent to one specific node. We tried using the airflow test command to run the task on the other nodes, and it worked there. Eventually, the root cause turned out to be a wrong Python package on that specific node.
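For reference, a single task instance can be exercised on a given machine with the airflow test command; the dag id, task id, and execution date below mirror the log excerpt in the question:

airflow test example_dag task1 2019-05-11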

[#AIRFLOW-2275] Scheduler errors for running tasks. I'm getting scheduler errors for tasks that are still running (and complete successfully): {jobs.py:1430} ERROR - Executor reports task instance <TaskInstance: 18:56:00+00:00 [queued]> finished (failed) although the task says its queued. Finally tracked this one down! Closes #1199. When tasks run, they try to catch any errors and, if there are any, mark themselves failed. In addition, SequentialExecutor and LocalExecutor both use subprocess.Popen to run Airflow commands, then wait() for the command to return.
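The Popen-then-wait pattern mentioned there can be sketched as follows; this is an illustration of the mechanism, not Airflow's actual executor code, and the command arguments are placeholders.

import subprocess

# LocalExecutor/SequentialExecutor-style flow: spawn an "airflow run ..." command
# and block until it exits; a non-zero exit code is treated as a failed task.
cmd = ["airflow", "run", "example_dag", "task1", "2019-05-11"]  # placeholder task
proc = subprocess.Popen(cmd)
returncode = proc.wait()
print("task command exited with return code", returncode)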

Dear Airflow maintainers, This is a trivial fix to an incorrectly-formatted log message that we have been seeing in our logs. Please accept it without the usual JIRA + tests + etc due to its triviality.

With the system otherwise untouched, the same DAG is now failing 100% of the time, roughly after the long-running task hits the 1-hour mark (though oddly, not exactly 3600 seconds later; it can be anywhere from 30 to 90 seconds after the hour ticks), with the message "Executor reports task instance finished (failed) although the task says its running."

[2017-08-31 11:23:44,025] {jobs.py:1729} ERROR - Executor reports task instance <TaskInstance: dag_name.task_name 2017-08-30 02:00:00 [running]> finished (failed) although the task says its running. Was the task killed externally?
[2017-08-31 11:23:44,025] {models.py:1427} ERROR - Executor reports task instance <TaskInstance: analytics_events

Comments
  • I have this same problem too. This was a simple DAG. How did you fix it?
  • @alltej I provided my answer below
  • We are not using the backfill command. The problem is in our Airflow ETL.
  • Why do you think this solved the issue? I'm trying to understand the issue better, we're experiencing it as well.
  • I am having this problem too. What is the fix? I tried creating the DAG with a different dag_id and task_id, hoping it had gotten stuck on some dag_id or task_id, but I still see the problem with the different dag_id and task_id.
  • I'm facing similar problems. Could you point out which package caused it?