API incidents

2023-11-24 - Outage of "similar images" functionality

2023-11-24 15:16 [fixed] The issue has been fixed. The outage lasted approximately 4 hours. We will cover this part of the response with integration tests to mitigate this issue in the future.

2023-11-24 14:53 [investigating] A few minutes ago we received a report that similar images from the plant health assessment service are missing from API responses. We have confirmed the report and are currently investigating the cause.
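The integration-test mitigation mentioned above could look roughly like the sketch below: a check that every disease suggestion in a health assessment response carries a non-empty `similar_images` list. The response shape and field names here are assumptions for illustration, not the documented Plant.id API schema.

```python
import json

def check_similar_images(response_body: str) -> bool:
    """Return True only if every suggestion carries a non-empty similar_images list.

    Assumed (illustrative) response shape:
    {"health_assessment": {"diseases": [{"name": ..., "similar_images": [...]}]}}
    """
    data = json.loads(response_body)
    suggestions = data.get("health_assessment", {}).get("diseases", [])
    if not suggestions:
        return False  # nothing to verify counts as a failure
    return all(s.get("similar_images") for s in suggestions)

# A response with an empty similar_images list should fail the check:
bad = json.dumps({"health_assessment": {"diseases": [
    {"name": "rust", "similar_images": []}]}})
good = json.dumps({"health_assessment": {"diseases": [
    {"name": "rust", "similar_images": [{"url": "https://example.com/1.jpg"}]}]}})
```

Run against a staging deployment, a check like this would have flagged the missing field before it reached production responses.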

2023-08-09 - Plant.id API outage

2023-08-09 12:15 [resolved] Connectivity is stable again and all services are working correctly.

2023-08-09 11:25 [update] The cause of the outage is a networking problem at our cloud service provider (incident report).

2023-08-09 10:30 [investigating] We are experiencing connectivity issues to the database and our model workers. Most identifications are being rejected or failing.

2023-05-21 - Slow identification for 0.05% of traffic

2023-05-26 10:01 [investigating] One of our GPU workers sometimes mysteriously freezes and causes a tiny amount of traffic to be requeued inefficiently, making some requests take much longer than usual (sometimes even a minute or two).

Note: The new, more efficient requeueing system has already been developed and we plan to release it during the summer.
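One common shape for the kind of requeueing system described above is a visibility-timeout dispatcher: a job handed to a worker that does not acknowledge within a deadline goes back to the front of the queue instead of waiting on a frozen worker. The class below is a minimal single-process sketch under that assumption; the names and the 30-second timeout are illustrative, not the released system.

```python
import time
from collections import deque

class RequeueingDispatcher:
    def __init__(self, timeout_s=30.0, clock=time.monotonic):
        self.queue = deque()
        self.in_flight = {}      # job_id -> acknowledgement deadline
        self.timeout_s = timeout_s
        self.clock = clock       # injectable for testing

    def submit(self, job_id):
        self.queue.append(job_id)

    def take(self):
        """Hand the next job to a worker and start its deadline."""
        job_id = self.queue.popleft()
        self.in_flight[job_id] = self.clock() + self.timeout_s
        return job_id

    def ack(self, job_id):
        """Worker finished the job; stop tracking it."""
        self.in_flight.pop(job_id, None)

    def requeue_stuck(self):
        """Return overdue jobs to the front of the queue (e.g. frozen worker)."""
        now = self.clock()
        stuck = [j for j, deadline in self.in_flight.items() if deadline <= now]
        for job_id in stuck:
            del self.in_flight[job_id]
            self.queue.appendleft(job_id)  # retry promptly, ahead of new work
        return stuck
```

Requeueing to the front keeps the latency penalty for an unlucky request bounded by roughly one timeout rather than a full trip through the backlog.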

2022-07-07 - Plant.id API outage

2022-07-07 11:18 [update] We switched back to the boosted database and turned on additional notifications to prevent this issue in the future.

2022-07-07 11:04 [resolved] We switched to the recovery database. Plant.id is working again.

2022-07-07 10:20 [update] We are increasing the database space and creating a new recovery database.

2022-07-07 10:01 [investigating] Our database ran out of storage space and switched to read-only mode. Identification stopped working.
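The "additional notifications" mitigation from this incident amounts to alerting on disk usage well before the database volume fills and flips to read-only. A minimal sketch, assuming illustrative thresholds and a hypothetical data-directory path:

```python
import shutil

def disk_usage_fraction(path="/var/lib/postgresql"):
    """Fraction of the volume holding `path` that is currently used.

    The path is a placeholder; point it at the real database data directory.
    """
    total, used, _free = shutil.disk_usage(path)
    return used / total

def check_db_disk(fraction_used, warn_at=0.80, page_at=0.90):
    """Map a usage fraction to an alert level. Thresholds are assumptions."""
    if fraction_used >= page_at:
        return "page"   # wake someone before read-only mode hits
    if fraction_used >= warn_at:
        return "warn"
    return "ok"
```

Run on a schedule and wired to a notification channel, a check like this gives hours of warning instead of a hard stop.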

2021-08-19 - Plant.id API outage

2021-08-19 6:09 [resolved] The migration took almost 2 hours.

2021-08-19 4:10 [investigating] Our database node stopped responding. The traffic could not be automatically routed to a dedicated secondary node for reasons we are still investigating. Our Engineering team immediately contacted the cloud provider. We decided to migrate DB to another cluster.