Amazon acknowledged late Monday, March 3, that two of its data centers in the United Arab Emirates (UAE) and a facility in Bahrain were damaged by drone strikes, taking the facilities offline.
An Amazon Web Services (AWS) update at 7:19 p.m. EST said the outages were caused by drone strikes tied to the “ongoing conflict in the Middle East.” AWS said, “In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure.” It added, “These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in further water damage.”
The company warned that instability is likely to continue in the Middle East, making operations “unpredictable.” AWS added notices to the top of its marketplaces in Israel, Saudi Arabia, Kuwait, Bahrain, and the UAE alerting customers to an “extended delivery time in your area.”
The incident occurred on the morning of March 1, with the company posting to its Amazon Web Services health dashboard at the time that “objects” had hit data centers in the UAE, causing “sparks and fire.”
In an update to its AWS health dashboard, the company said that two facilities in the United Arab Emirates were “directly struck” by drones on Sunday, causing extensive damage. A site in Bahrain was damaged by a drone strike that occurred nearby.
Operational issue - Multiple services (UAE); Services impacted: Multiple services; Severity: Disrupted
Mar 02 4:19 PM PST: We are providing an update on the ongoing service disruptions affecting the AWS Middle East (UAE) Region (ME-CENTRAL-1) and the AWS Middle East (Bahrain) Region (ME-SOUTH-1).
Due to the ongoing conflict in the Middle East, both affected regions have experienced physical impacts to infrastructure as a result of drone strikes. In the UAE, two of our facilities were directly struck, while in Bahrain, a drone strike in close proximity to one of our facilities caused physical impacts to our infrastructure.
These strikes have caused structural damage, disrupted power delivery to our infrastructure, and in some cases required fire suppression activities that resulted in further water damage.
We are working closely with local authorities and prioritizing the safety of our staff throughout our recovery efforts.

In the ME-CENTRAL-1 (UAE) Region, two of our three Availability Zones (mec1-az2 and mec1-az3) remain significantly impaired. The third Availability Zone (mec1-az1) continues to operate normally, though some services have experienced indirect impact due to dependencies on the affected zones.
In the ME-SOUTH-1 (Bahrain) Region, one facility has been impacted. Across both regions, customers are experiencing elevated error rates and degraded availability for services including Amazon EC2, Amazon S3, Amazon DynamoDB, AWS Lambda, Amazon Kinesis, Amazon CloudWatch, Amazon RDS, and the AWS Management Console and CLI.
We are working to restore full service availability as quickly as possible, though we expect recovery to be prolonged given the nature of the physical damage involved.

In parallel with efforts to restore the physical infrastructure at the affected sites, we are pursuing multiple software-based recovery paths that do not depend on the underlying facilities being fully brought back online. For Amazon S3 and Amazon DynamoDB, we are actively working to restore data access and service availability through software mitigations, including deploying updates to enable S3 to operate within the current infrastructure constraints and remediating impaired DynamoDB tables to restore read and write availability for dependent services.
Our focus on restoring these foundational services is deliberate, as recovery of Amazon S3 and Amazon DynamoDB will in turn enable a wide range of dependent AWS services to recover. For other affected service APIs, we are deploying targeted software updates to reduce error rates and restore functionality where possible, independent of the physical recovery timeline. We are also working to restore access to the AWS Management Console and CLI through network-level changes that route traffic away from the affected infrastructure.
While these software-based mitigations can address many of the service-level impacts, some recovery actions are constrained by the physical state of the affected facilities, meaning that full restoration of certain services will require the underlying infrastructure to be repaired and brought back online. Across all services, our teams are working in parallel on both the physical restoration of the affected facilities and these software-based mitigations, with the goal of restoring as much customer access as possible, as quickly as possible, even ahead of full infrastructure recovery.
In addition, we are prioritizing the restoration of services and tools that enable customers to back up and migrate their data and applications out of the affected regions.

Finally, even as we work to restore these facilities, the ongoing conflict in the region means that the broader operating environment in the Middle East remains unpredictable. We recommend that customers with workloads running in the Middle East consider taking action now to back up data and potentially migrate workloads to alternate AWS Regions.
We recommend customers exercise their disaster recovery plans, recover from remote backups stored in other regions, and update their applications to send traffic away from the affected regions.
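One common way to act on that last recommendation is at the DNS layer. As a minimal illustration (not anything AWS published), the boto3 sketch below drains a weighted Route 53 record for an endpoint in the affected region and shifts all traffic to a healthy region; the hosted zone ID, record name, and endpoints are hypothetical:

```python
# Sketch: shift DNS traffic away from an affected region using weighted
# Route 53 records. Zone ID, record name, and endpoints are hypothetical.
import boto3

route53 = boto3.client("route53")

def set_weight(zone_id, name, set_id, endpoint, weight):
    """Upsert one weighted CNAME record in the given hosted zone."""
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "CNAME",
                "SetIdentifier": set_id,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": endpoint}],
            },
        }]},
    )

# Drain the affected region and send all traffic to the healthy one.
set_weight("Z0EXAMPLE", "api.example.com", "me-central-1", "app-me.example.com", 0)
set_weight("Z0EXAMPLE", "api.example.com", "eu-west-1", "app-eu.example.com", 100)
```

A low TTL on the records keeps the shift fast; clients pick up the new weights as cached lookups expire.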
For customers requiring guidance on alternate regions, we recommend considering AWS Regions in the United States, Europe, or Asia Pacific, as appropriate for your latency and data residency requirements. We will continue to provide updates as recovery progresses and as the situation evolves.
Our next update will be provided by 9:00 PM PST on March 2, 2026, or sooner if new information becomes available.

Mar 02 1:36 PM PST: We continue to work towards recovery of the two impaired Availability Zones (mec1-az2 and mec1-az3) in the ME-CENTRAL-1 Region. We have partially restored access to the AWS Management Console; however, some pages will continue to load unsuccessfully until we have recovered core services and power.
In parallel with the power and recovery efforts, we are working to restore access to tools and utilities that allow customers to back up and migrate their data.
We have no updated guidance on expected recovery times, and still expect it to take at least a day to fully restore power and connectivity. We continue to advise customers to enact their disaster recovery plans and recover from remote backups into alternate AWS Regions.
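In practice, recovering from remote backups into an alternate Region can mean copying a backup image into a healthy region and launching replacements there. The following boto3 sketch assumes the backup AMI is still reachable from the source region (or was copied out beforehand); the AMI ID, regions, and instance type are hypothetical:

```python
# Sketch: recover into an alternate region from a backup AMI.
# AMI ID, regions, and instance type are hypothetical.
import boto3

target = boto3.client("ec2", region_name="eu-west-1")

# Copy the backup image into the recovery region.
copy = target.copy_image(
    Name="recovered-app-server",
    SourceImageId="ami-0123456789abcdef0",  # hypothetical backup AMI
    SourceRegion="me-central-1",
)
target.get_waiter("image_available").wait(ImageIds=[copy["ImageId"]])

# Launch a replacement instance from the copied image.
target.run_instances(
    ImageId=copy["ImageId"],
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
)
```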
We will provide another update by 6:00 PM PST, or sooner if new information becomes available.

Mar 02 9:59 AM PST: We continue to work towards recovery of the two impaired Availability Zones (mec1-az2 and mec1-az3) in the ME-CENTRAL-1 Region. The impact is causing elevated error rates for both the Management Console and CLI. Our current expectation is that recovery will take at least a day to complete. We continue to recommend that customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions.
We will continue to provide periodic updates on recovery efforts. Our next update will be by 2:00 PM PST, or sooner if new information becomes available.

Mar 02 6:22 AM PST: We continue to work towards recovery of the two impaired Availability Zones (mec1-az2 and mec1-az3) in the ME-CENTRAL-1 Region. We expect recovery to take at least a day, as it requires repair of facilities, cooling and power systems, coordination with local authorities, and careful assessment to ensure the safety of our operators.
EC2, Amazon DynamoDB, and other AWS services continue to experience significant error rates and elevated latencies. We recommend customers enact their disaster recovery plans and recover from remote backups into alternate AWS Regions, ideally in Europe. Further, we strongly advise customers to update their applications to ingest S3 data to an alternate AWS Region. We will provide an update by 11:00 AM PST on March 2, or sooner if we have more information to share.

Mar 02 2:53 AM PST: We wanted to provide more information on Amazon S3, given that there are two impaired Availability Zones (mec1-az2 and mec1-az3) in the ME-CENTRAL-1 Region. Amazon S3 is a regional service designed to withstand the full loss of a single Availability Zone while maintaining S3's durability and availability. When the mec1-az2 AZ was powered off at around 4:00 AM PST on Sunday, March 1, S3 continued to operate normally.
As the second AZ became impaired, S3 error rates increased. With two Availability Zones significantly impacted, customers are seeing high failure rates for data ingest and egress. We strongly advise customers to update their applications to ingest S3 data to an alternate AWS Region. As soon as practically possible, we will begin the restoration of our two Availability Zones, which will include a careful assessment of data health and any repair of storage if necessary.

In addition, we can confirm that the AWS Management Console and command line interface (CLI) are disrupted by the loss of the two Availability Zones. We continue to work towards recovery across all services, and we will provide an update by 6:00 AM PST on March 2, or sooner if we have more information to share.
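For the repeated guidance about ingesting S3 data to an alternate region, one client-side approach is a write path that falls back to a bucket in a healthy region when the primary write fails. A minimal sketch, assuming a pre-created fallback bucket; bucket names and regions are hypothetical:

```python
# Sketch: write S3 data to an alternate region when the primary fails.
# Bucket names and regions are hypothetical; the fallback bucket must exist.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

primary = boto3.client("s3", region_name="me-central-1")
fallback = boto3.client("s3", region_name="eu-west-1")

def put_with_fallback(key, body):
    """Try the primary bucket first; on failure, write to the fallback bucket."""
    try:
        primary.put_object(Bucket="app-data-me", Key=key, Body=body)
    except (ClientError, EndpointConnectionError):
        fallback.put_object(Bucket="app-data-eu", Key=key, Body=body)

put_with_fallback("events/2026-03-02.json", b'{"status": "failover"}')
```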
Mar 02 12:52 AM PST: We continue to work on a localized power issue affecting multiple Availability Zones in the ME-CENTRAL-1 Region (mec1-az2 and mec1-az3). Customers are experiencing increased EC2 API errors and instance launch failures across the region, and it is not currently possible to launch new instances; existing instances in mec1-az1 should not be affected. Amazon DynamoDB and Amazon S3 are also experiencing significant error rates and elevated latencies.
We are actively working to restore power and connectivity, after which we will begin recovery of affected resources; full recovery is still expected to be many hours away.
We recommend that affected customers fail over, and back up any critical data, to another AWS Region. We will provide an update by 2:00 AM PST, or sooner if the situation changes.

Mar 01 10:46 PM PST: We can confirm that a localized power issue has affected another Availability Zone in the ME-CENTRAL-1 Region (mec1-az3). Customers are also experiencing increased EC2 API and instance launch errors for the remaining zone (mec1-az1).
At this point it is not possible to launch new instances in the region, though existing instances in mec1-az1 should not be affected. Other AWS services, such as DynamoDB and S3, are also experiencing significant error rates and latencies.
We are actively working to restore power and connectivity, at which time we will begin work to recover affected resources. As of this time, we expect recovery is multiple hours away.
For customers that can, we recommend failing over to another AWS Region at this time. We will provide an update by 12:00 AM PST, or sooner if we have more information to share.

Mar 01 9:59 PM PST: We are investigating further connectivity issues and error rates in the ME-CENTRAL-1 Region.

Mar 01 6:01 PM PST: We can confirm the recovery of AssociateAddress API requests. We have also applied a change that enables customers to disassociate Elastic IP addresses from resources that are impacted by the underlying power issue.
With these mitigations, customers can now successfully create and associate new network addresses in the unaffected AZs, as well as re-associate Elastic IPs from resources in the affected zone to resources in the unaffected zones.
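For customers acting on that mitigation, re-pointing an Elastic IP is a two-call sequence: disassociate, then associate. A minimal boto3 sketch with hypothetical allocation and instance IDs:

```python
# Sketch: move an Elastic IP from a resource in an affected zone to a
# replacement in an unaffected zone. IDs are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

# Look up the current association for the Elastic IP.
addr = ec2.describe_addresses(
    AllocationIds=["eipalloc-0abc1234def567890"]
)["Addresses"][0]

# Detach it from the impacted resource, if it is still associated.
if "AssociationId" in addr:
    ec2.disassociate_address(AssociationId=addr["AssociationId"])

# Re-associate it with a replacement instance in an unaffected AZ.
ec2.associate_address(
    AllocationId=addr["AllocationId"],
    InstanceId="i-0fedcba9876543210",  # hypothetical instance in mec1-az1
)
```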
We still do not have an ETA for power restoration at this time. For customers that can, we recommend using alternate Availability Zones or other AWS Regions where applicable. We will provide another update by 10:00 PM, or sooner if we have more information to share.

Mar 01 4:26 PM PST: We are seeing significant signs of recovery for AssociateAddress requests, and continue to work toward fully mitigating this issue. This, combined with the earlier recovery of the AllocateAddress API, means customers can now successfully create and associate new network addresses in the unaffected AZs. Other AWS services are also now observing sustained recovery as a result of the EC2 networking APIs' recovery.
We are now focused on implementing a change that will allow customers to disassociate Elastic IP addresses from resources that are impacted by the underlying power issue. We expect this specific mitigation to take another hour to complete. We do not have an ETA for power restoration at this time. For customers that can, we recommend using alternate Availability Zones or other AWS Regions where applicable. We will provide another update by 6:30 PM, or sooner if we have more information to share.

Mar 01 2:28 PM PST: We are seeing positive signs of recovery for many of the EC2 APIs, such as Describes and AllocateAddress. We recognize that customers are still experiencing errors when attempting to call the AssociateAddress API, and are unable to disassociate addresses from resources that are affected by the underlying power issue. We continue to work on multiple parallel paths to mitigate both of these issues.
We recommend continuing to retry requests where possible.
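The simplest way to follow that retry guidance is to let the SDK do it. A sketch using botocore's adaptive retry mode, which backs off and paces requests when calls are throttled or fail transiently; the region and call are illustrative:

```python
# Sketch: automatic retries with backoff via botocore's adaptive retry mode.
import boto3
from botocore.config import Config

retry_config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
ec2 = boto3.client("ec2", region_name="me-central-1", config=retry_config)

# Calls through this client are retried with backoff on throttling and
# transient errors before an exception is finally raised.
addresses = ec2.describe_addresses()
```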
We expect our current mitigation efforts for these specific issues to complete within the next two to three hours. As we progress with these mitigation efforts, customers will observe higher success rates for these operations. Additionally, we are investigating ways to speed up these specific mitigation efforts, but are ensuring we do so safely. As of this time, power restoration is still several hours away.
We will provide another update by 5:30 PM PST, or sooner if we have more information to share.

Mar 01 12:14 PM PST: We are aware that some customers are experiencing errors when calling EC2 APIs, specifically networking-related APIs (AllocateAddress, AssociateAddress, DescribeRouteTables, DescribeNetworkInterfaces). We are actively working on multiple paths to mitigate these issues. For customers experiencing throttling errors on the AllocateAddress APIs, we recommend retrying any failed API requests.
We are deploying a configuration change to mitigate the AssociateAddress API errors and expect recovery in the next few hours.
DescribeRouteTables and DescribeNetworkInterfaces API calls that do not specify zone, interface, or instance IDs are expected to fail until we restore the impacted zone. We recommend customers pass these IDs explicitly in these API requests. For customers that can, we recommend considering the use of alternate AWS Regions.
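A minimal sketch of the scoped Describe calls being recommended, with hypothetical resource IDs; unscoped calls enumerate resources across all zones and may fail while one zone is impaired:

```python
# Sketch: scope Describe calls to known resource IDs instead of listing
# everything in the region. The IDs shown are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

route_tables = ec2.describe_route_tables(
    RouteTableIds=["rtb-0123456789abcdef0"]
)
interfaces = ec2.describe_network_interfaces(
    NetworkInterfaceIds=["eni-0123456789abcdef0"]
)
```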
We will provide another update by 3:30 PM PST, or sooner if we have more to share.

Mar 01 9:41 AM PST: We want to provide some additional information on the power issue in a single Availability Zone in the ME-CENTRAL-1 Region. At around 4:30 AM PST, one of our Availability Zones (mec1-az2) was impacted by objects that struck the data center, creating sparks and fire. The fire department shut off power to the facility and generators as they worked to put out the fire.
We are still awaiting approval to turn the power back on, and once we have it, we will ensure we restore power and connectivity safely.
It will take several hours to restore connectivity to the impacted AZ. The other AZs in the region are functioning normally. Customers who were running their applications redundantly across the AZs are not impacted by this event. EC2 instance launches will continue to be impaired in the impacted AZ.
We recommend that customers continue to retry any failed API requests. If immediate recovery of an affected resource (EC2 instance, EBS volume, RDS DB instance, etc.) is required, we recommend restoring from your most recent backup by launching replacement resources in one of the unaffected zones or an alternate AWS Region. We will provide an update by 12:30 PM PST, or sooner if we have more information to share.

Mar 01 8:59 AM PST: We continue to work toward restoring power in the affected Availability Zone in the ME-CENTRAL-1 Region (mec1-az2). In parallel, we are actively working on improving the error rates and latencies that some customers are observing for the EC2 networking and EC2 Describe APIs. Due to increased demand in the unaffected Availability Zones, customers may experience longer than usual provisioning times, may need to retry requests for certain instance types, or may need to select an alternate instance type.
We will provide an update by 10:30 AM PST, or sooner if we have more information to share.

Mar 01 7:09 AM PST: We wanted to provide some additional information on the isolated power issue. At this time, most AWS services have weighted away from the affected Availability Zone (mec1-az2) and are seeing recovery for their affected operations and workflows. For EC2 instances, EBS volumes, and other resources that are impaired in the affected zone, recovery will be a longer process.
At this time, power has not yet been restored to the affected AZ.
For now, we recommend continuing to retry any failed API requests. If immediate recovery is required, we recommend customers restore from EBS snapshots and/or replace affected resources by launching replacements in one of the unaffected zones or an alternate region. As of this time, recovery is still several hours away. We will provide an update by 8:30 AM PST, or sooner if we have more information to share.
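Restoring from EBS snapshots into an unaffected zone, as recommended above, might look like the following boto3 sketch; the snapshot ID, instance ID, and zone name are hypothetical:

```python
# Sketch: recreate an EBS volume from its most recent snapshot in an
# unaffected Availability Zone and attach it to a replacement instance.
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

# Create a replacement volume in a healthy zone from the snapshot.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot
    AvailabilityZone="me-central-1a",     # an unaffected zone
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

# Attach it to a replacement instance running in the same zone.
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0fedcba9876543210",  # hypothetical replacement instance
    Device="/dev/sdf",
)
```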
Mar 01 6:09 AM PST: We can confirm that a localized power issue has affected a single Availability Zone in the ME-CENTRAL-1 Region (mec1-az2). EC2 instances, DB instances, EBS volumes, and other resources are currently unavailable and will experience connectivity issues at this time. Other AWS services are also experiencing error rates and latencies for some workflows. We have weighted traffic away for most services at this time.
We recommend customers utilize one of the other Availability Zones in the ME-CENTRAL-1 Region at this time, as existing instances in the other AZs remain unaffected by this issue. We are actively working to restore power and connectivity, at which time we will begin work to recover affected resources. As of this time, we expect recovery is multiple hours away. We will provide an update by 7:15 AM PST, or sooner if we have more information to share.

Mar 01 5:19 AM PST: We are investigating connectivity and power issues affecting APIs and instances in a single Availability Zone (mec1-az2) in the ME-CENTRAL-1 Region due to a localized power issue. Existing instances in this zone will also be affected. Other AWS services may also be experiencing increased errors and latencies for their workflows, and we are working to route requests away from this affected Availability Zone. We recommend customers make use of the other Availability Zones at this time.
Targeting new launches using RunInstances at the remaining AZs should succeed. Existing instances in the other AZs are not affected.

Mar 01 4:51 AM PST: We are investigating issues with AWS services in the ME-CENTRAL-1 Region.
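The 5:19 AM update notes that launches targeted at the remaining AZs should succeed; pinning a launch to a specific zone is done through the Placement parameter of RunInstances. A minimal sketch with a hypothetical AMI, instance type, and zone:

```python
# Sketch: pin a new launch to a specific, unaffected Availability Zone.
# AMI ID, instance type, and zone name are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="me-central-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"AvailabilityZone": "me-central-1a"},
)
```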
