GitHub Status - Incident History https://www.githubstatus.com Statuspage Thu, 15 May 2025 03:15:55 +0000 Disruption with Gemini 2.5 Pro model <p><small>May <var data-var='date'>15</var>, <var data-var='time'>01:02</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>May <var data-var='date'>15</var>, <var data-var='time'>01:02</var> UTC</small><br><strong>Update</strong> - We have received confirmation from our upstream provider that the issue has been resolved. We are seeing significant recovery. The Gemini 2.5 Pro model is now fully available in Copilot Chat, VS Code, and other Copilot products.</p><p><small>May <var data-var='date'>14</var>, <var data-var='time'>21:18</var> UTC</small><br><strong>Update</strong> - We are continuing to experience degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products. We are working closely with our upstream provider to resolve this issue.</p><p><small>May <var data-var='date'>14</var>, <var data-var='time'>16:46</var> UTC</small><br><strong>Update</strong> - We are continuing to experience degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.</p><p><small>May <var data-var='date'>14</var>, <var data-var='time'>16:01</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Gemini 2.5 Pro model in Copilot Chat, VS Code, and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>May <var data-var='date'>14</var>, <var data-var='time'>15:14</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the Gemini 2.5 Pro model, which is in Public Preview. Users may see intermittent errors with this model.</p><p><small>May <var data-var='date'>14</var>, <var data-var='time'>14:39</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Thu, 15 May 2025 01:02:54 +0000 https://www.githubstatus.com/incidents/kpv13bbn84n5 https://www.githubstatus.com/incidents/kpv13bbn84n5 Codespaces Scheduled Maintenance <p><small>May <var data-var='date'>14</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>May <var data-var='date'>13</var>, <var data-var='time'>19:52</var> UTC</small><br><strong>Scheduled</strong> - Codespaces will be undergoing global maintenance from 16:30 UTC on Wednesday, May 14 to 16:30 UTC on Thursday, May 15. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete.<br /><br />During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones.<br /><br />To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts.
Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.</p> Wed, 14 May 2025 16:30:22 +0000 Thu, 15 May 2025 16:30:00 +0000 https://www.githubstatus.com/incidents/bs901hhxgw33 https://www.githubstatus.com/incidents/bs901hhxgw33 Disruption with some GitHub services <p><small>May <var data-var='date'>12</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>May <var data-var='date'>12</var>, <var data-var='time'>14:53</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Mon, 12 May 2025 15:06:02 +0000 https://www.githubstatus.com/incidents/gxf0jzns6rn2 https://www.githubstatus.com/incidents/gxf0jzns6rn2 Codespaces Scheduled Maintenance <p><small>May <var data-var='date'> 8</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>May <var data-var='date'> 7</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>May <var data-var='date'> 6</var>, <var data-var='time'>19:29</var> UTC</small><br><strong>Scheduled</strong> - Codespaces will be undergoing global maintenance from 16:30 UTC on Wednesday, May 7 to 16:30 UTC on Thursday, May 8. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete.<br /><br />During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones.<br /><br />To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.</p> Thu, 08 May 2025 16:30:21 +0000 Thu, 08 May 2025 16:30:00 +0000 https://www.githubstatus.com/incidents/hkp6z7kt2qm6 https://www.githubstatus.com/incidents/hkp6z7kt2qm6 Incident with Git Operations <p><small>May <var data-var='date'> 8</var>, <var data-var='time'>16:27</var> UTC</small><br><strong>Resolved</strong> - On May 8, 2025, between 14:40 UTC and 16:27 UTC the Git Operations service was degraded causing some pushes and merges to fail. On average, the error rate was 1.4% with a peak error rate of 2.24%. This was due to a configuration change which unexpectedly led a critical service to shut down on a subset of hosts that store repository data.<br /><br />We mitigated the incident by re-deploying the affected service to restore its functionality.<br /><br />In order to prevent similar incidents from happening again, we identified the cause that triggered this behavior and mitigated it for future deployments. 
Additionally, to reduce time to detection, we will improve monitoring of the impacted service.</p><p><small>May <var data-var='date'> 8</var>, <var data-var='time'>16:18</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>May <var data-var='date'> 8</var>, <var data-var='time'>16:12</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>May <var data-var='date'> 8</var>, <var data-var='time'>16:03</var> UTC</small><br><strong>Update</strong> - We have identified the issue and applied mitigations, and are monitoring for recovery.</p><p><small>May <var data-var='date'> 8</var>, <var data-var='time'>15:23</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. We are continuing to investigate.</p><p><small>May <var data-var='date'> 8</var>, <var data-var='time'>15:20</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Git Operations and Pull Requests</p> Thu, 08 May 2025 16:27:56 +0000 https://www.githubstatus.com/incidents/42gtccf6dd98 https://www.githubstatus.com/incidents/42gtccf6dd98 Issue Attachments Failing to Upload <p><small>May <var data-var='date'> 1</var>, <var data-var='time'>23:13</var> UTC</small><br><strong>Resolved</strong> - On May 1, 2025 from 22:09 UTC to 23:13 UTC, the Issues service was degraded and users weren't able to upload attachments. The root cause was identified as a new feature that added a custom header to all client-side HTTP requests, causing CORS errors when uploading attachments to our provider.<br /><br />We mitigated the incident by rolling back the feature flag that added the new header at 22:56 UTC. In order to prevent this from happening again, we are adding new metrics to monitor and ensure the safe rollout of changes to client-side requests.</p><p><small>May <var data-var='date'> 1</var>, <var data-var='time'>23:13</var> UTC</small><br><strong>Update</strong> - We have identified the underlying cause of attachment upload failures to Issues and mitigated it by rolling back a feature flag. If you are still experiencing failures when uploading attachments to Issues, please reload your page.</p><p><small>May <var data-var='date'> 1</var>, <var data-var='time'>22:29</var> UTC</small><br><strong>Update</strong> - We are investigating attachment upload failures on Issues. We will continue to keep users updated on progress towards mitigation.</p><p><small>May <var data-var='date'> 1</var>, <var data-var='time'>22:28</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Issues</p> Thu, 01 May 2025 23:13:18 +0000 https://www.githubstatus.com/incidents/9zmv2mqqxsbm https://www.githubstatus.com/incidents/9zmv2mqqxsbm
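<p>For context on the root cause above: in browsers, adding a non-standard header to a cross-origin request turns it into a preflighted request, and the upload fails if the storage provider's CORS policy does not allow that header. Below is a minimal, hypothetical sketch of that failure mode; the header name and upload endpoint are invented for illustration and are not GitHub's actual client code.</p>
<pre><code class="language-typescript">
// Browser-side sketch of the failure mode described in the incident above.
// The header name and upload endpoint are hypothetical.
async function uploadAttachment(file: File) {
  const body = new FormData();
  body.append("file", file);

  return fetch("https://uploads.example-provider.test/attachments", {
    method: "POST",
    body,
    // Without this header the POST is a "simple" CORS request and succeeds.
    // With it, the browser first sends an OPTIONS preflight; if the provider's
    // Access-Control-Allow-Headers response does not list "x-client-feature",
    // the preflight fails and the upload never reaches the provider.
    headers: { "X-Client-Feature": "enabled" },
  });
}
</code></pre>
<p>Rolling back the feature flag that injected the header returns the request to its original, preflight-free shape, which matches the mitigation described above.</p>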
Disruption with Pull Request Ref Updates <p><small>Apr <var data-var='date'>30</var>, <var data-var='time'>21:05</var> UTC</small><br><strong>Resolved</strong> - On April 30, 2025, between 8:02 UTC and 9:05 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs. This resulted in an increase in job failures, delays for non-migration sourced jobs, and delays to tracking refs.<br /><br />We declared an incident once we confirmed that this issue was not isolated to the migrating repository and other repositories were also failing to process ref updates.<br /><br />We mitigated the incident by shifting the migration jobs to a different job queue.<br /><br />To avoid problems like this in the future, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads.</p><p><small>Apr <var data-var='date'>30</var>, <var data-var='time'>20:53</var> UTC</small><br><strong>Update</strong> - Some customers of github.com are reporting issues with PR tracking refs not being updated due to processing delays and increased failure rates. We're investigating the source of the issue.</p><p><small>Apr <var data-var='date'>30</var>, <var data-var='time'>20:51</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> Wed, 30 Apr 2025 21:05:36 +0000 https://www.githubstatus.com/incidents/7r0jg1plygfm https://www.githubstatus.com/incidents/7r0jg1plygfm Delays for web and email notification delivery <p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>12:52</var> UTC</small><br><strong>Resolved</strong> - On April 29th, 2025, between 8:40am UTC and 12:50pm UTC, the notifications service was degraded and stopped delivering most web and email notifications as well as some mobile push notifications. This was due to a large and faulty schema migration that rendered a set of database primaries unhealthy, affecting the notification delivery pipelines and causing delays in most of the web and email notification deliveries.<br /><br />We mitigated the incident by stopping the migration and promoting replicas to replace the unhealthy primaries.<br /><br />In order to prevent similar incidents in the future, we are addressing the underlying issues in the online schema tooling and improving the way we interact with the database so that it does not disrupt production workloads.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>12:52</var> UTC</small><br><strong>Update</strong> - The notification delivery backlog has been processed and notifications are now being delivered as expected.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>12:45</var> UTC</small><br><strong>Update</strong> - New notification deliveries are occurring in a timely manner and we have processed a significant portion of the backlog. Users may still notice delayed delivery of some older notifications.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>11:57</var> UTC</small><br><strong>Update</strong> - Web and email notifications continue to be delivered successfully and the service is in a healthy state.
We are processing the backlog of notification deliveries, which are currently delayed by as much as 30-60 minutes.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>11:09</var> UTC</small><br><strong>Update</strong> - We are starting to see signals of recovery with delayed web/email notifications now being dispatched.<br /><br />The team continues to monitor recovery to ensure a return to normal service.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>10:37</var> UTC</small><br><strong>Update</strong> - We are seeing impact on both web and email notifications, with most customers seeing delayed deliveries.<br /><br />The last incident update regarding impact on email notifications was incorrect. Email notifications have been experiencing the same delays as web notifications for the duration of the incident.<br /><br />We have applied changes to our system and are monitoring to see if these restore normal service. Updates to follow.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>10:07</var> UTC</small><br><strong>Update</strong> - Web notifications are experiencing delivery delays for the majority of customers. We are working to mitigate impact and restore delivery times to within normal operating bounds.<br /><br />Email notifications remain unaffected and are delivering as normal.<br /><br />We will provide further updates as we have more information.</p><p><small>Apr <var data-var='date'>29</var>, <var data-var='time'>10:05</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Tue, 29 Apr 2025 12:52:28 +0000 https://www.githubstatus.com/incidents/jvyb8cp9h0lh https://www.githubstatus.com/incidents/jvyb8cp9h0lh Incident with Git Operations, API Requests and Issues <p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>11:09</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>10:35</var> UTC</small><br><strong>Update</strong> - We are seeing signs of recovery and continue to monitor latency.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>09:52</var> UTC</small><br><strong>Update</strong> - We continue to investigate impact to Issues and Pull Requests. Customers may see some timeouts as we work towards mitigation.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>08:58</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate impact to Issues and Pull Requests. We will provide more updates as we have them.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>08:23</var> UTC</small><br><strong>Update</strong> - Users may see timeouts when viewing Pull Requests. We are still investigating the issues related to Issues and Pull Requests and will provide further updates as soon as we can.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>08:21</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>08:05</var> UTC</small><br><strong>Update</strong> - Issues API is currently seeing elevated latency.
We are investigating the issue and will provide further updates as soon as we have them.</p><p><small>Apr <var data-var='date'>28</var>, <var data-var='time'>08:03</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for API Requests, Git Operations and Issues</p> Mon, 28 Apr 2025 11:09:10 +0000 https://www.githubstatus.com/incidents/7lbnm9mg6549 https://www.githubstatus.com/incidents/7lbnm9mg6549 Disruption with some GitHub services <p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>22:20</var> UTC</small><br><strong>Resolved</strong> - Starting at 19:13:50 UTC, the service responsible for importing Git repositories began experiencing errors that impacted both GitHub Enterprise Importer migrations and the GitHub Importer; both were restored at 22:11:00 UTC. At the time, 837 migrations across 57 organizations were affected. Impacted migrations would have shown the error message "Git source migration failed. Error message: An error occurred. Please contact support for further assistance." in the migration logs and required a retry.<br /><br />The root cause of the issue was a recent configuration change that caused our workers, responsible for syncing the Git repository, to lose the necessary access required for the migration. We were able to restore the needed access for the workers, and all dependent services resumed normal operation.<br />We’ve identified and implemented additional safeguards to help prevent similar disruptions in the future.<br /></p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>21:44</var> UTC</small><br><strong>Update</strong> - We are investigating issues with GitHub Enterprise Importer. We will continue to keep users updated on progress towards mitigation.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>21:38</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Wed, 23 Apr 2025 22:20:46 +0000 https://www.githubstatus.com/incidents/7hdyxpt7sdzm https://www.githubstatus.com/incidents/7hdyxpt7sdzm Incident with Issues, API Requests and Pages <p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>08:00</var> UTC</small><br><strong>Resolved</strong> - On April 23, 2025, between 07:00 UTC and 07:20 UTC, multiple GitHub services experienced degradation caused by resource contention on database hosts. The resulting error rates, which ranged from 2–5% of total requests, led to intermittent service disruption for users. The issue was triggered by heavy workloads on the database leading to connection saturation.<br /><br />The incident was mitigated when database throttling activated, which allowed the system to rebalance its connections. This restored traffic flow to the database and service functionality.<br /><br />To prevent similar issues in the future, we are reviewing the capacity of the database, improving monitoring and alerting systems, and implementing safeguards to reduce time to detection and mitigation.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:47</var> UTC</small><br><strong>Update</strong> - A brief problem with one of our database clusters caused intermittent errors around 07:05 UTC for a few minutes.
Our systems have recovered and we continue to monitor.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:43</var> UTC</small><br><strong>Update</strong> - Issues is operating normally.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:42</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:42</var> UTC</small><br><strong>Update</strong> - Pages is operating normally.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:42</var> UTC</small><br><strong>Update</strong> - API Requests is operating normally.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:41</var> UTC</small><br><strong>Update</strong> - Codespaces is operating normally.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:23</var> UTC</small><br><strong>Update</strong> - Codespaces is experiencing degraded performance. We are continuing to investigate.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:22</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. We are continuing to investigate.</p><p><small>Apr <var data-var='date'>23</var>, <var data-var='time'>07:17</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for API Requests, Issues and Pages</p> Wed, 23 Apr 2025 08:00:06 +0000 https://www.githubstatus.com/incidents/lrby2hyk80dh https://www.githubstatus.com/incidents/lrby2hyk80dh Codespaces Scheduled Maintenance <p><small>Apr <var data-var='date'>22</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Apr <var data-var='date'>21</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Apr <var data-var='date'>17</var>, <var data-var='time'>15:30</var> UTC</small><br><strong>Scheduled</strong> - Codespaces will be undergoing global maintenance from 16:30 UTC on Monday, April 21 to 16:30 UTC on Tuesday, April 22. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete.<br /><br />During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones.<br /><br />To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.</p> Tue, 22 Apr 2025 16:30:21 +0000 Tue, 22 Apr 2025 16:30:00 +0000 https://www.githubstatus.com/incidents/znhjr4vqqcdw https://www.githubstatus.com/incidents/znhjr4vqqcdw Disruption with some GitHub services <p><small>Apr <var data-var='date'>17</var>, <var data-var='time'>17:35</var> UTC</small><br><strong>Resolved</strong> - On April 15th, during regular testing, we found a bug in our Copilot Metrics Pipeline infrastructure that caused some of the data used to aggregate Copilot usage for the Copilot Metrics API not to be ingested.
As a result of the bug, customer metrics in the Copilot Metrics API would have indicated lower than expected Copilot usage for the previous 28 days.<br />To mitigate the incident, we fixed the bug so that all data from April 14th onwards would be accurately calculated, and we immediately began backfilling the previous 28 days with the correct data. All data has been corrected as of 2025-04-17 5:34PM UTC.<br />We have added additional monitoring to catch similar pipeline failures earlier in the future and are working on enhancing our data validation to ensure that all metrics we provide are accurate.</p><p><small>Apr <var data-var='date'>17</var>, <var data-var='time'>17:34</var> UTC</small><br><strong>Update</strong> - We have resolved the data inconsistency issues in the Copilot Metrics API as of April 17th 2025 16:00 UTC. All data is now accurate.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>22:44</var> UTC</small><br><strong>Update</strong> - We are continuing to work on correcting the Copilot Metrics API data from March 19th 2025 to April 14th 2025. Data from April 15 and later is accurate. Currently, the API returns about 10% lower usage numbers. Based on the current investigation, we estimate a resolution by April 18th 01:00 UTC. We will provide an update if there is a change in the ETA.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>05:11</var> UTC</small><br><strong>Update</strong> - We have an updated ETA on correcting all Copilot metrics API data: 20 hours. We won't post more updates here unless the ETA changes.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>01:46</var> UTC</small><br><strong>Update</strong> - We are working on correcting the Copilot metrics API source data from March 19th to April 14th. Currently, the API returns about 10% lower usage numbers than actual usage. We don't have an ETA for the resolution at the moment.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>00:41</var> UTC</small><br><strong>Update</strong> - The Copilot metrics API (https://docs.github.com/en/enterprise-cloud@latest/rest/copilot/copilot-metrics?apiVersion=2022-11-28) now returns accurate data for April 15th. We're working on correcting the past 27 days, as we are under-reporting certain metrics from this time.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>23:33</var> UTC</small><br><strong>Update</strong> - We'll have accurate data for April 15th in the next 60 minutes. We're still working on correcting the data for the additional 27 days before April 15th. The complete correction is estimated to take up to 7 days, but we're working to speed this up.<br /><br />https://docs.github.com/en/enterprise-cloud@latest/rest/copilot/copilot-metrics?apiVersion=2022-11-28 is the specific impacted API.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>21:45</var> UTC</small><br><strong>Update</strong> - As we've made further progress on correcting the inconsistencies, we estimate it will take approximately a week for a full recovery. We are investigating options for speeding up the recovery, and we appreciate your patience as we work through this incident.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>19:02</var> UTC</small><br><strong>Update</strong> - We are working on correcting the inconsistencies now; in our next update, we will provide an estimated time for when the issue will be fully resolved.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>18:20</var> UTC</small><br><strong>Update</strong> - We are currently experiencing degraded performance with our Copilot metrics API, which is temporarily causing partial inconsistencies in the data returned. Our engineering teams are actively working to restore full functionality. We understand the importance of timely updates and are prioritizing a resolution to ensure all systems are operating normally as quickly as possible.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>18:20</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Thu, 17 Apr 2025 17:35:02 +0000 https://www.githubstatus.com/incidents/jmfzh1p2yrg2 https://www.githubstatus.com/incidents/jmfzh1p2yrg2
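<p>The incident above affected the figures returned by the Copilot Metrics API linked in the updates. A minimal sketch of how an organization might spot-check the reported daily numbers, assuming the org-level endpoint and a token with the access described in that documentation; the field names below follow those docs and should be verified against the current reference.</p>
<pre><code class="language-typescript">
// Sketch: fetch the last 28 days of Copilot metrics for an org and print daily totals.
// ORG and GITHUB_TOKEN are placeholders; the endpoint and field names are assumptions
// based on the docs linked in the incident updates.
const ORG = "your-org";
const TOKEN = process.env.GITHUB_TOKEN ?? "";

async function printCopilotMetrics() {
  const res = await fetch(`https://api.github.com/orgs/${ORG}/copilot/metrics`, {
    headers: {
      Accept: "application/vnd.github+json",
      Authorization: `Bearer ${TOKEN}`,
      "X-GitHub-Api-Version": "2022-11-28",
    },
  });
  if (!res.ok) {
    throw new Error(`Copilot metrics request failed: ${res.status}`);
  }
  const days = await res.json(); // one entry per day
  for (const day of days) {
    // A sudden dip of roughly 10% against your usual baseline is the kind of
    // anomaly this incident produced while the backfill was in progress.
    console.log(day.date, "active:", day.total_active_users, "engaged:", day.total_engaged_users);
  }
}

printCopilotMetrics().catch(console.error);
</code></pre>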
Incident with Pull Requests <p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>17:26</var> UTC</small><br><strong>Resolved</strong> - On April 16, 2025, between 3:22:36 PM UTC and 5:26:55 PM UTC, the Pull Request service was degraded. On average, 0.7% of page views were affected. This primarily affected logged-out users, but some logged-in users were affected as well.<br /><br />This was due to an error in how certain Pull Request timeline events were rendered, and we resolved the incident by updating the timeline event code.<br /><br />We are enhancing test coverage to include additional scenarios and piloting new tools to prevent similar incidents in the future.<br /></p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>17:26</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>17:13</var> UTC</small><br><strong>Update</strong> - The fix is rolling out and we're seeing recovery for users encountering 500 errors when viewing a pull request.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>16:17</var> UTC</small><br><strong>Update</strong> - The fix is currently being deployed; we anticipate this will be fully mitigated in approximately thirty minutes.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>15:55</var> UTC</small><br><strong>Update</strong> - Users may experience 500 errors when viewing a PR. Most of the impact is limited to anonymous access, but a small handful of logged-in users are also experiencing this.
We have the fix prepared and it will be deployed soon.</p><p><small>Apr <var data-var='date'>16</var>, <var data-var='time'>15:22</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> Wed, 16 Apr 2025 17:26:58 +0000 https://www.githubstatus.com/incidents/jhs2z7c69yfd https://www.githubstatus.com/incidents/jhs2z7c69yfd Disruption with some GitHub services for Safari Users <p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>14:12</var> UTC</small><br><strong>Resolved</strong> - On April 15, 2025 from 12:45 UTC to 13:56 UTC, access to GitHub.com was restricted for logged-out users using WebKit-based browsers, such as Safari and various mobile browsers. During the impact window, roughly 6.6M requests were unsuccessful.<br /><br />This issue was caused by a configuration change that was intended to improve our handling of large traffic spikes but was improperly targeted at too large a set of requests.<br /><br />To prevent future incidents like this, we are improving how we operationalize these types of changes, adding additional tools for validating what will be impacted by such changes, and reducing the likelihood of manual mistakes through automated detection and handling of such spikes.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>14:11</var> UTC</small><br><strong>Update</strong> - Safari users are now able to access GitHub.com.<br /><br />The fix has been rolled out to all environments.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>13:59</var> UTC</small><br><strong>Update</strong> - Most unauthenticated Safari users should now be able to access github.com. We are ensuring the fix is deployed to all environments.<br /><br />Next update in 30m.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>13:46</var> UTC</small><br><strong>Update</strong> - We have identified the cause of the restriction for Safari users and are deploying a fix. Next update in 15 minutes.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>13:30</var> UTC</small><br><strong>Update</strong> - Some unauthenticated Safari users are seeing the message "Access to this site has been restricted." We are currently investigating this behavior.</p><p><small>Apr <var data-var='date'>15</var>, <var data-var='time'>13:30</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Tue, 15 Apr 2025 14:12:03 +0000 https://www.githubstatus.com/incidents/0r3t35cbghdy https://www.githubstatus.com/incidents/0r3t35cbghdy
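<p>For context on "improperly targeted at too large a set of requests": User-Agent based rules are easy to over-scope because UA tokens are shared across whole browser families; on iOS, for example, every browser is WebKit-based and advertises a "Safari/" token. The sketch below is purely illustrative of that pitfall and is not GitHub's actual traffic-handling configuration.</p>
<pre><code class="language-typescript">
// Illustrative only -- not GitHub's actual spike-mitigation rule.
// A hypothetical restriction keyed on a broad User-Agent token ends up matching
// Safari and most mobile (WebKit-based) browsers at once.
const SAFARI_UA =
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15";
const CHROME_ON_IOS_UA =
  "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) CriOS/124.0 Mobile/15E148 Safari/604.1";

function isRestricted(userAgent: string, isLoggedIn: boolean): boolean {
  if (isLoggedIn) return false;           // logged-in users were unaffected
  return userAgent.includes("Safari/");   // over-broad: matches the whole WebKit family
}

console.log(isRestricted(SAFARI_UA, false));        // true
console.log(isRestricted(CHROME_ON_IOS_UA, false)); // true -- unintended collateral impact
</code></pre>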
[Retroactive] Access from China temporarily blocked for users that were not logged in <p><small>Apr <var data-var='date'>12</var>, <var data-var='time'>20:00</var> UTC</small><br><strong>Resolved</strong> - Due to a configuration change with unintended impact, some users who were not logged in and tried to visit GitHub.com from China were temporarily unable to access the site. Users who were already logged in could continue to access the site successfully. Impact started 2025/04/12 at 20:01 UTC. Impact was mitigated 2025/04/13 at 14:55 UTC. During this time, up to 4% of all anonymous requests originating from China were unsuccessful.<br /><br />The configuration changes that caused this impact have been reversed and users should no longer see problems when trying to access GitHub.com.</p> Sat, 12 Apr 2025 20:00:00 +0000 https://www.githubstatus.com/incidents/jfvgcls9swln https://www.githubstatus.com/incidents/jfvgcls9swln Incident with Codespaces <p><small>Apr <var data-var='date'>11</var>, <var data-var='time'>00:51</var> UTC</small><br><strong>Resolved</strong> - On April 11 from 3:05am UTC to 3:44am UTC, approximately 75% of Codespaces users faced create and start failures. These were caused by manual configuration changes to an internal dependency. We reverted the changes and immediately restored service health.<br /><br />We are working on safer mechanisms for testing and rolling out such configuration changes, and we expect no further disruptions.</p><p><small>Apr <var data-var='date'>11</var>, <var data-var='time'>00:50</var> UTC</small><br><strong>Update</strong> - We have reverted a problematic configuration change and are seeing recovery across starts and resumes.</p><p><small>Apr <var data-var='date'>11</var>, <var data-var='time'>00:44</var> UTC</small><br><strong>Update</strong> - We have identified an issue that is causing errors when starting new and resuming existing Codespaces. We are currently working on a mitigation.</p><p><small>Apr <var data-var='date'>11</var>, <var data-var='time'>00:28</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Codespaces</p> Fri, 11 Apr 2025 00:51:49 +0000 https://www.githubstatus.com/incidents/6kf8j273wf0l https://www.githubstatus.com/incidents/6kf8j273wf0l Disruption with some Pull Requests stuck in processing state <p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>00:39</var> UTC</small><br><strong>Resolved</strong> - On April 9, 2025, between 11:27 UTC and 12:39 UTC, the Pull Requests service was degraded and experienced delays in processing updates. At peak, approximately 1–1.5% of users were affected by delays in synchronizing pull requests. During this period, users may have seen a "Processing updates" message in their pull requests after pushing new commits, and the new commits did not appear in the Pull Request view as expected. The Pull Request synchronization process has automatic retries and most delays were automatically resolved. Any Pull Requests that were not resynchronized during this window were manually synchronized on Friday, April 11 at 14:23 UTC.<br /><br />This was due to a misconfigured GeoIP lookup file that our routine GitHub operations depended on, which caused background job processing to fail.
<br /><br />We mitigated the incident by reverting to a known good version of the GeoIP lookup file on affected hosts.<br /><br />We are working to enhance our CI testing and automation by validating GeoIP metadata to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>00:36</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>00:19</var> UTC</small><br><strong>Update</strong> - The team has identified a mitigation and is rolling it out while actively monitoring recovery.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>23:39</var> UTC</small><br><strong>Update</strong> - Some users are experiencing delays in pull request updates. After pushing new commits, PRs show a "Processing updates" message, and the new commits do not appear in the pull request view.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>23:36</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>23:27</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Thu, 10 Apr 2025 00:39:04 +0000 https://www.githubstatus.com/incidents/vxfd5br11t6v https://www.githubstatus.com/incidents/vxfd5br11t6v Incident with Pull Requests <p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>09:31</var> UTC</small><br><strong>Resolved</strong> - On April 9, 2025, between 7:01 UTC and 9:31 UTC, the Pull Requests service was degraded and failed to update refs for repositories with higher traffic. This was due to a repository migration creating a larger than usual number of enqueued jobs. This resulted in an increase in job failures and delays for non-migration sourced jobs.<br /><br />We declared an incident once we confirmed that this issue was not isolated to the migrating repository and other repositories were also failing to process ref updates.<br /><br />We mitigated the incident by shifting the migration jobs to a different job queue.<br /><br />To avoid problems like this in the future, we are revisiting our repository migration process and are working to isolate potentially problematic migration workloads from non-migration workloads.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>09:28</var> UTC</small><br><strong>Update</strong> - We saw a period of delays affecting Pull Request experiences. The impact has ended, but we are investigating to prevent a recurrence.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>09:00</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> Wed, 09 Apr 2025 09:31:53 +0000 https://www.githubstatus.com/incidents/mb56b2qv4pyz https://www.githubstatus.com/incidents/mb56b2qv4pyz
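<p>Both this incident and the April 30 ref-update incident above were mitigated by moving migration jobs onto a separate job queue, so that a burst of migration work cannot starve routine ref-update jobs. A minimal sketch of that isolation pattern, assuming the BullMQ library with Redis; the queue names and job payloads are hypothetical and are not GitHub's internals.</p>
<pre><code class="language-typescript">
// Sketch of workload isolation with separate queues and worker pools:
// a flood of migration jobs only delays other migrations, not ref updates.
import { Queue, Worker } from "bullmq";

const connection = { host: "127.0.0.1", port: 6379 };

// One queue per workload class (names are illustrative).
const refUpdates = new Queue("ref-updates", { connection });
const migrations = new Queue("repo-migrations", { connection });

async function enqueueExamples() {
  // Work is enqueued onto the queue that matches its workload class.
  await refUpdates.add("sync-pull-request-refs", { repoId: 42, prNumber: 7 });
  await migrations.add("import-repository", { sourceUrl: "https://example.com/repo.git" });
}

// Dedicated workers with independent concurrency budgets.
new Worker("ref-updates", async () => { /* update tracking refs */ }, { connection, concurrency: 50 });
new Worker("repo-migrations", async () => { /* run one migration step */ }, { connection, concurrency: 5 });

enqueueExamples().catch(console.error);
</code></pre>
<p>The design point is that the queue boundary, not job priority within a shared queue, provides the isolation: the migration queue can back up arbitrarily without delaying ref updates.</p>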
Vision requests are unavailable for certain models on Copilot Chat on github.com <p><small>Apr <var data-var='date'> 8</var>, <var data-var='time'>18:21</var> UTC</small><br><strong>Resolved</strong> - On 2025-04-08, between 00:42 and 18:05 UTC, as we rolled out an updated version of our GPT-4o model, we observed that vision capabilities for GPT-4o for Copilot Chat in GitHub were intermittently unavailable. During this period, customers may have been unable to upload image attachments to Copilot Chat in GitHub.<br /><br />In response, we paused the rollout at 18:05 UTC. Recovery began immediately and telemetry indicates that the issue was fully resolved by 18:21 UTC.<br /><br />Following this incident, we have identified areas of improvement in our model rollout process, including enhanced monitoring and expanded automated and manual testing of our end-to-end capabilities.</p><p><small>Apr <var data-var='date'> 8</var>, <var data-var='time'>18:20</var> UTC</small><br><strong>Update</strong> - The issue has now been resolved, and we're actively monitoring the service for any further issues.</p><p><small>Apr <var data-var='date'> 8</var>, <var data-var='time'>17:58</var> UTC</small><br><strong>Update</strong> - Image attachments are not available for some models on Copilot Chat on github.com. The issue has been identified and the fix is in progress.</p><p><small>Apr <var data-var='date'> 8</var>, <var data-var='time'>17:53</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Tue, 08 Apr 2025 18:21:16 +0000 https://www.githubstatus.com/incidents/mdfybxmvbp0z https://www.githubstatus.com/incidents/mdfybxmvbp0z Disruption with some GitHub services <p><small>Apr <var data-var='date'> 7</var>, <var data-var='time'>02:31</var> UTC</small><br><strong>Resolved</strong> - On April 7, 2025, between 2:15:37 AM UTC and 2:31:14 AM UTC, multiple GitHub services were degraded. Requests to these services returned 5xx errors at a high rate due to an internal database being exhausted by our Codespaces service. The incident resolved on its own.<br /><br />We have addressed the problematic queries from the Codespaces service, minimizing the risk of future recurrences.</p><p><small>Apr <var data-var='date'> 7</var>, <var data-var='time'>02:31</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Apr <var data-var='date'> 7</var>, <var data-var='time'>02:24</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Apr <var data-var='date'> 7</var>, <var data-var='time'>02:19</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Mon, 07 Apr 2025 02:31:17 +0000 https://www.githubstatus.com/incidents/cjq2q04mfydp https://www.githubstatus.com/incidents/cjq2q04mfydp Disruption with some GitHub services <p><small>Apr <var data-var='date'> 3</var>, <var data-var='time'>19:12</var> UTC</small><br><strong>Resolved</strong> - On 2025-04-03, between 6:13:27 PM UTC and 7:12:00 PM UTC, the docs.github.com service was degraded and returned errors. On average, the error rate was 8% and peaked at 20% of requests to the service.
This was due to a misconfiguration combined with elevated request volume.<br />We mitigated the incident by correcting the misconfiguration.<br />We are working to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Apr <var data-var='date'> 3</var>, <var data-var='time'>18:56</var> UTC</small><br><strong>Update</strong> - We are investigating intermittent unavailability of GitHub's Docs and working on applying mitigations.</p><p><small>Apr <var data-var='date'> 3</var>, <var data-var='time'>18:51</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Thu, 03 Apr 2025 19:12:04 +0000 https://www.githubstatus.com/incidents/6xn7r7hhrq72 https://www.githubstatus.com/incidents/6xn7r7hhrq72 Scheduled Codespaces Maintenance <p><small>Apr <var data-var='date'> 3</var>, <var data-var='time'>17:00</var> UTC</small><br><strong>Completed</strong> - The scheduled maintenance has been completed.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>17:00</var> UTC</small><br><strong>In progress</strong> - Scheduled maintenance is currently in progress. We will provide updates as necessary.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:11</var> UTC</small><br><strong>Scheduled</strong> - Codespaces will be undergoing maintenance in all regions starting from 17:00 UTC on Wednesday, April 2 to 17:00 UTC on Thursday, April 3. Maintenance will begin in Southeast Asia, Central India, Australia Central, and Australia East regions. Once it is complete, maintenance will start in UK South and West Europe, followed by East US, East US2, West US2, and West US3. Each batch of regions will take approximately three to four hours to complete.<br /><br />During this time period, users may experience connectivity issues with new and existing Codespaces.<br /><br />If you have uncommitted changes you may need during the maintenance window, you should verify they are committed and pushed before maintenance starts. Codespaces with any uncommitted changes will be accessible as usual once maintenance is complete.</p> Thu, 03 Apr 2025 17:00:22 +0000 Thu, 03 Apr 2025 17:00:00 +0000 https://www.githubstatus.com/incidents/xb49wskhzrm2 https://www.githubstatus.com/incidents/xb49wskhzrm2 Disruption with some GitHub services <p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>20:20</var> UTC</small><br><strong>Resolved</strong> - Between 2025-03-27 12:00 UTC and 2025-04-03 16:00 UTC, the GitHub Enterprise Cloud Dormant Users report was degraded and falsely indicated that dormant users were active within their business. This was due to increased load on a database from a non-performant query.<br /><br />We mitigated the incident by increasing the capacity of the database and installing monitors for this specific report to improve observability in the future. As a long-term solution, we are rewriting the Dormant Users report to optimize how it queries for user activity, which will result in significantly faster and more accurate report generation.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>19:09</var> UTC</small><br><strong>Update</strong> - We are aware that the generation of the Dormant Users Report is delayed for some of our customers, and that the resulting report may be inaccurate.
We are actively investigating the root cause and a possible remediation.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>19:08</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Wed, 02 Apr 2025 20:20:00 +0000 https://www.githubstatus.com/incidents/d3t3d2jn3b0j https://www.githubstatus.com/incidents/d3t3d2jn3b0j Disruption with some GitHub services <p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>09:29</var> UTC</small><br><strong>Resolved</strong> - On April 1st, 2025, between 08:17:00 UTC and 09:29:00 UTC, the data store powering the Audit Log service experienced elevated errors, resulting in an approximately 45-minute delay of Audit Log Events. Our systems maintained data continuity and we experienced no data loss. The delay only affected the Audit Log API and the Audit Log user interface. Any configured Audit Log Streaming endpoints received all relevant Audit Log Events. The data store team deployed mitigating actions, which resulted in a full recovery of the data store’s availability.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>09:04</var> UTC</small><br><strong>Update</strong> - The Audit Log is experiencing an increase in failed queries due to availability issues with the associated data store. Audit Log data is experiencing a delay in availability. We have identified the issue and we are deploying mitigating measures.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>08:31</var> UTC</small><br><strong>Investigating</strong> - We are currently investigating this issue.</p> Tue, 01 Apr 2025 09:29:15 +0000 https://www.githubstatus.com/incidents/mr6vzllykhmw https://www.githubstatus.com/incidents/mr6vzllykhmw
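<p>During the window above, configured Audit Log Streaming endpoints stayed current while the Audit Log API and UI lagged behind. A minimal sketch of checking how fresh the API's view is for an organization, assuming a GitHub Enterprise Cloud org, a token with audit-log read access, and the endpoint and field names from the public audit-log REST documentation; verify these against the current reference.</p>
<pre><code class="language-typescript">
// Sketch: list the newest audit log events for an org and report how far behind
// "now" they are -- a rough way to notice delivery delays like the one above.
// ORG and GITHUB_TOKEN are placeholders; "@timestamp" and "action" are assumed
// field names taken from the public audit-log docs.
const ORG = "your-org";
const TOKEN = process.env.GITHUB_TOKEN ?? "";

async function checkAuditLogFreshness() {
  const res = await fetch(
    `https://api.github.com/orgs/${ORG}/audit-log?include=all&per_page=5&order=desc`,
    {
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${TOKEN}`,
        "X-GitHub-Api-Version": "2022-11-28",
      },
    },
  );
  if (!res.ok) {
    throw new Error(`audit-log request failed: ${res.status}`);
  }
  const events = await res.json();
  for (const event of events) {
    const ageMinutes = (Date.now() - event["@timestamp"]) / 60000;
    console.log(event.action, `~${ageMinutes.toFixed(1)} min ago`);
  }
}

checkAuditLogFreshness().catch(console.error);
</code></pre>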