I was watching the video *Computing at Scale: Challenges & Opportunities*, a panel from the Google Faculty Summit. Here are a few of the interesting points that were made.
The panelists observed several trends and problems (these are the ones that caught my ear, not a comprehensive list):
- We are drowning in data (data-intensive computing). How do we handle this much data? (e.g. a telescope could generate 200GB/sec).
- Data-driven approaches are becoming popular.
- How do we program large-scale systems? Patterns, middleware, and teaching students to program using them.
- Storage and computing power are becoming cheaper, and they are increasingly hosted remotely.
- There is a need for multidisciplinary collaborations to solve problems (e.g. e-science problems).
- With the cloud, the cost of 1000 CPUs for one day equals the cost of 1 CPU for 1000 days (Prof. Patterson's observation).
- In large-scale systems, no matter how reliable the hardware is, it fails, and the software has to handle it (an observation from Google).
- Animoto (a company running on EC2) was using about 50 nodes, but after a Facebook app took off they had to handle a 10X larger user base within a week, and they were able to scale their system up to 3500 nodes using EC2. See here for details.
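Prof. Patterson's cost observation above is easy to sanity-check with a bit of arithmetic. The sketch below is a minimal illustration; the $0.10 per CPU-hour rate is an assumed number for the example, not a real EC2 price.

```python
HOURLY_RATE = 0.10  # dollars per CPU-hour (assumed for illustration, not a real price)

def cloud_cost(cpus, days, rate=HOURLY_RATE):
    """Total cost of renting `cpus` machines for `days` days under pay-per-use pricing."""
    return cpus * days * 24 * rate

burst = cloud_cost(cpus=1000, days=1)    # 1000 CPUs for 1 day
serial = cloud_cost(cpus=1, days=1000)   # 1 CPU for 1000 days

# Same total cost either way -- but the burst run finishes ~1000x sooner.
print(burst, serial)  # 2400.0 2400.0
```

The point of the observation is that pay-per-use pricing makes massive short bursts of parallelism cost the same as a long serial run, which was impossible when you had to buy the hardware up front.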