Posts Tagged ‘Skynet’

Skynet is upon us, and it has chosen Amazon as its prime target. (For non-Terminator fans, Skynet is the main antagonist of the Terminator franchise: an artificially intelligent system that became self-aware and revolted against its creators. April 21st is the date on which Skynet began its attack on the human race.) This time the prophecy seemed to hold: Skynet struck on April 21st in the guise of massive outages at Amazon Web Services’ N. Virginia data center, triggering immediate concerns over the security of data in the cloud. Within hours, a large number of people were already citing the incident as an example of why you shouldn’t move to the cloud. Hundreds of popular websites, including Foursquare, Change.org, fotopedia and Wattpad, went down because of multiple failures at Amazon’s N. Virginia data center. Services were fully restored only on the 24th of April, making this one of the longest and most infamous disasters in the short history of cloud computing.

For starters, Amazon’s image took a serious beating after this incident. To make matters worse, Amazon refrained from making any statement about the incident even while the whole world was trying to understand what exactly had happened.

While there is no official word yet, some speculation suggests it could have been Chinese hackers (probably state-sponsored) who brought the servers down. In a letter sent to its members, Change.org announced that the site had been the target of cyber attacks coming from China, probably ordered by the Chinese government. Change.org, which is hosted on Amazon Web Services, reported: “Change.org is currently experiencing intermittent downtime due to a denial of service attack from China on our web site. It appears the attack is in response to a Change.org petition signed by nearly 100,000 people worldwide, who are standing against the detention of Chinese artist and activist Ai WeiWei. Despite this attack on our members and our platform, we will continue to stand with the supporters of Ai Weiwei to defend free speech and the freedom to organize for people everywhere.”

There is thus a high likelihood that an attack aimed at pulling down Change.org actually brought down Amazon’s entire N. Virginia data center.

Now, before you decide against the cloud as an option for storing your data, consider this:

While Amazon kept updating customers on how they could re-mirror their data to different locations (though the tips didn’t work in many cases), most customers had more or less recovered from the downtime within two days. Even two days sounds like a lot, but the question is: how many small enterprises have the ability to sustain an attack of this magnitude, possibly a state-sponsored one, on their own? Would those criticizing the cloud over this disaster have been able to recover within two or three days had the attack hit their own infrastructure instead of the Amazon cloud? Would Change.org have been able to partially recover within hours of impact? Do some analysis and you’ll get your answer.
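To give a rough sense of what that re-mirroring advice amounted to in practice, here is a minimal sketch of pulling an EBS volume out of a troubled Availability Zone by snapshotting it and restoring the snapshot in a healthy zone. The volume ID, instance ID and target zone are hypothetical, and the boto3 SDK is used purely for illustration; this is not the exact procedure Amazon circulated during the outage.

```python
import boto3

# Sketch: copy an EBS volume out of an impaired Availability Zone by
# snapshotting it and restoring the snapshot in a healthy zone.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Snapshot the volume that lives in the degraded zone (hypothetical ID).
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="emergency copy during AZ outage",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Re-create the volume from the snapshot in a different Availability Zone.
new_vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-1d",   # any zone other than the impaired one
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. Attach the copy to a standby instance running outside the affected zone.
ec2.attach_volume(
    VolumeId=new_vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical standby instance
    Device="/dev/sdf",
)
```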

How many organisations are ISO 27001 certified? How many organisations have been certified by multiple third-party auditors on countless parameters? Look at it this way and you’ll see that the hundreds of organisations impacted by the attack didn’t really make a poor decision in going with the cloud. In fact, they were among the wisest of the lot. The only flip side is that they suffered collateral damage, since the attacks weren’t even targeted at them!

The problem doesn’t lie with the cloud but with how you manage it. Here is an example of how data needs to be managed in the cloud and how you can prevent such disasters from impacting you.
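As a deliberately simple illustration of that principle, the sketch below keeps an off-region copy of critical data: the application may run out of N. Virginia, but backups are pushed to an S3 bucket in a separate region, so a single regional failure never holds the only copy. The bucket name, file paths and regions are hypothetical, and boto3 is used purely for illustration.

```python
import boto3

# Sketch: keep critical backups in a different AWS region from the one the
# application runs in, so one regional outage never holds the only copy.
PRIMARY_REGION = "us-east-1"                 # where the application runs (N. Virginia)
BACKUP_REGION = "us-west-2"                  # where the off-site copies live
BACKUP_BUCKET = "example-offsite-backups"    # hypothetical bucket name

s3_backup = boto3.client("s3", region_name=BACKUP_REGION)

def back_up(local_path: str, key: str) -> None:
    """Push one file to the out-of-region backup bucket."""
    s3_backup.upload_file(local_path, BACKUP_BUCKET, key)

# Run from a nightly cron job on the application server, for example:
back_up("/var/backups/db-dump.sql.gz", "nightly/db-dump.sql.gz")
```

The point is less the specific API calls than the design choice: never let one provider region, or one data center, hold the only copy of data you cannot afford to lose.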

After a long, lazy weekend practically wasted because of an attack on the Sony PlayStation Network that brought PSN to a complete halt, gamers woke up Monday morning with a ray of hope that Sony would surely have fixed PSN by now. To their dismay, PSN is still down even after five days of outage, and there is more bad news.

According to a source with close connections to Sony Computer Entertainment Europe, the attack on the PlayStation Network may run deeper than originally reported by Sony. According to the source, who wishes to remain anonymous, the PSN sustained a LOIC attack (which amounted to a denial-of-service attack) that damaged the server. There was also a concentrated attack on the PlayStation servers holding account information. In addition, “Admin Dev accounts were breached.”

As a result, “Sony then shut down the PSN and [is] currently in the process of restoring backups to new servers with new admin dev accounts.” The SCEE (Sony Computer Entertainment Europe) source said the Japanese servers may be restored tomorrow, while the U.S. and E.U. servers will likely be operational the following day.

Sony Computer Entertainment America recently confirmed that it pulled down PSN because of an “external intrusion.” The PlayStation Network and Qriocity services were taken offline by Sony on Wednesday, April 20. Initially, the hacktivist group Anonymous was suspected of the attacks, but the group later denied any involvement.

Be it Anonymous or someone else, Sony should’ve been prepared for Skynet on April 21st (well, considering the outage took place on the 20th of April, Sony perhaps deserves a little less abuse). However, if Sony, with billions of dollars in its pockets and thousands of bright minds behind it, can’t manage to build a contingency plan, then perhaps it truly deserves this apocalypse. The only sad part is that tons of gamers now have to suffer because of Sony’s inability to secure its infrastructure and to have crisis management in place.

Amazon AWS, the popular cloud-based EC2 web hosting service, is experiencing technical issues that have taken down the websites of several organisations, including social media companies like Reddit, Foursquare and Quora.

Amazon Web Services is seeing connectivity problems, latency and errors, which have continued for nearly eight hours, and the problem seems to be getting worse. However, according to Amazon, only the North American data centers are facing issues at the moment. Apart from the above-mentioned companies, others impacted include:

Discovr, an iPad music app, reported that it went down, but shortly afterwards reported that its service was restored.

Wildfire, a social media app, reports that it is down.

Livefyre is down.

Here’s an interesting one. CampgroundManager.com, apparently a software-as-a-service application used to manage campgrounds, says it is down.

A service called Totango, which appears to do something with managing customer relations and subscriptions, had some issues but moved some things around and got things mostly working again.

ESchedule, a Canada-based employee scheduling service, reports its service is down.

ZeHosting, a Web host, says it is experiencing slowdowns.

Recorded Future, which bills itself as a “temporal analytics engine”, is reporting an outage.

PercentMobile, a mobile analytics firm, says its service is down.

The Cydia Store, which hosts applications available for jailbroken iPhones, reports it is down.

Here’s the latest from the EC2 team:

1:41 AM PT We are currently investigating latency and error rates with EBS volumes and connectivity issues reaching EC2 instances in the US-EAST-1 region.

2:18 AM PT We can confirm connectivity errors impacting EC2 instances and increased latencies impacting EBS volumes in multiple availability zones in the US-EAST-1 region. Increased error rates are affecting EBS CreateVolume API calls. We continue to work towards resolution.

2:49 AM PT We are continuing to see connectivity errors impacting EC2 instances, increased latencies impacting EBS volumes in multiple availability zones in the US-EAST-1 region, and increased error rates affecting EBS CreateVolume API calls. We are also experiencing delayed launches for EBS backed EC2 instances in affected availability zones in the US-EAST-1 region. We continue to work towards resolution.

3:20 AM PT Delayed EC2 instance launches and EBS API error rates are recovering. We’re continuing to work towards full resolution.

4:09 AM PT EBS volume latency and API errors have recovered in one of the two impacted Availability Zones in US-EAST-1. We are continuing to work to resolve the issues in the second impacted Availability Zone. The errors, which started at 12:55AM PDT, began recovering at 2:55am PDT

5:02 AM PT Latency has recovered for a portion of the impacted EBS volumes. We are continuing to work to resolve the remaining issues with EBS volume latency and error rates in a single Availability Zone.