Ronnie05's Blog

Taking the Internet Down – Factors and Ease of Effort

Posted in Internet and Search by Manas Ganguly on December 2, 2012

Can the Internet really be taken down? The recent blackouts in Syria and Egypt beg the question: can the Internet be engineered to fail abruptly and completely in certain regions of the world?

The key to the Internet's survival is its decentralization, and that decentralization is not uniform across the world. In some countries, international access to data and telecommunications services is heavily regulated. There may be only one or two entities that hold official licenses to carry voice and Internet traffic to and from the outside world, and they are required by law to mediate access for everyone else. Under those circumstances, it is almost trivial for a government to issue an order that takes down the Internet. On the flip side, this level of centralization also makes it much harder for the government to defend the nation's Internet infrastructure against a determined opponent, who knows a lot of damage can be done by hitting just a few targets.

With good reason, most countries have gradually moved towards more diversity in their Internet infrastructure over the last decade. Sometimes that happens all by itself, as a side effect of economic growth and market forces, with many different companies moving into the market and competing to provide the cheapest international Internet access to the citizenry. Even then, though, there is often a government regulator standing by, allowing (or better yet, encouraging) the formation of a diverse web of direct connections to international providers.

How easy is it to disconnect the Internet? A world map.


Renesys, an Internet monitoring and intelligence firm, classifies countries by their risk of an Internet blackout, based on the number of internationally connected providers (Internet gateways) at each country's frontier; a minimal classification sketch follows the list below.

  • 1 or 2 companies at the international frontier – classified as being at severe risk of Internet disconnection. Those 61 countries include places like Syria, Tunisia, Algeria, Turkmenistan, Libya, Ethiopia, Uzbekistan, Myanmar, and Yemen.
  • Fewer than 10 service providers – probably exposed to some significant risk of Internet disconnection. Ten providers also seems to be the threshold below which one finds significant additional risk from infrastructure sharing: there may be a single cable, or a single physical-layer provider who actually owns most of the infrastructure on which the various providers offer their services. In this category there are 72 countries, including Oman, Benin, Botswana, Rwanda, Pakistan, Kyrgyzstan, Uganda, Armenia, and Iran. Disconnection wouldn't be trivial, but it wouldn't be all that difficult. Egypt falls into this category as well; it took the Mubarak government several days to hunt down and kill the last connections, but in the end, the blackout succeeded.
  • More than 10 internationally connected service providers, but fewer than about 40 – the risk of disconnection is fairly low. Given a determined effort, it's plausible that the Internet could be shut down over a period of days or weeks, but it would be hard to implement and even harder to maintain that state of blackout. There are 58 countries in this situation, ranging from Bahrain (at the small end) to Mexico (at the large end). India, Israel, Ecuador, Chile, Vietnam, and (perhaps surprisingly) China are all in this category. So is Afghanistan, reminding us that sometimes national Internet diversity is the product of regional fragmentation and severe technical challenges. It's true: the government in Kabul is powerless to turn off the national Internet, because it's built out of diverse service from various satellite providers, as well as Uzbek, Iranian, and Pakistani terrestrial transit.
  • More than 40 providers – extremely resistant to Internet disconnection. There are just too many paths into and out of the country, too many independent providers who would have to be coerced or damaged, to make a rapid countrywide shutdown plausible to execute. A government might significantly impair Internet connectivity by shutting down large providers, but there would still be a deep pool of persistent paths to the global Internet. In this category are the big Internet economies: Canada, the USA, the Netherlands, etc., about 32 countries in all.
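
To make the thresholds above concrete, here is a minimal sketch (my own illustration, not Renesys code; the example provider counts are placeholders chosen to land each country in the tier named in the list, not actual Renesys figures):

```python
def disconnection_risk(provider_count: int) -> str:
    """Map the number of internationally connected providers in a country
    to the risk tiers described in the list above (illustrative only)."""
    if provider_count <= 2:
        return "severe risk"       # 1-2 frontier companies: a shutdown order is trivial
    elif provider_count < 10:
        return "significant risk"  # shared cables add hidden single points of failure
    elif provider_count < 40:
        return "low risk"          # a shutdown would take days or weeks to enforce
    else:
        return "resistant"         # too many independent paths to coerce or damage quickly

# Placeholder counts, chosen only to match the categories named in the post.
for country, providers in {"Syria": 2, "Egypt": 8, "India": 25, "USA": 60}.items():
    print(country, "->", disconnection_risk(providers))
```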

In many other cases, physical pathways are the limiting factor, even with multiple providers. If the providers all share the very few long-haul fiber paths into and out of a country, taking out those paths could still lead to a blackout.

(This post is inspired by a Renesys blog post and quotes Renesys figures on blackout probabilities by country.)


What could be India’s next Killer app on Internet?

Posted in e-commerce, Internet and Search by Manas Ganguly on November 19, 2012

The Internet was introduced in India in 1998, and over the last 14 years Internet penetration has grown to 121 million users (10.2% of India's population). India is already the third-largest country globally by number of Internet users.

Internet Growth in India

The convenience of booking train travel through the IRCTC website is seen as a significant catalyst for the growth of Internet penetration across India (smaller towns and rural areas included), and as the grand old dad of e-commerce in India. India, a conservative society in terms of credit card spending, has never shied away from paying for railway tickets online. Thus IRCTC is, in effect, India's first killer Internet app.

IRCTC – Number of tickets sold & revenue (2003-12)

For a geography and an economy such as India, the Internet will in future be the tool for delivering education, healthcare, governance, information and banking to the masses. There is considerable focus and push from the government to increase Internet penetration in the country. So what could be India's next killer Internet app? The premise of a "killer app" here is that it increases Internet penetration. This is how it could really start working:

1. Vernacular Internet – With Indic scripts and other language technologies reaching the maturity threshold, the Internet in India will follow a vernacular path to mass adoption.

2. UID aided – The UID will be key to providing every individual in the country with an identity that is stored centrally and accessed by different services (banking, loans, education, professional, travel, etc.). Aadhaar, which means "basis" in Hindi, will be the fundamental construct for empowering the people of India.

3. Financial inclusion – The government is currently working on a Direct Cash Transfer Scheme (DCTS) as part of MGNREGS (the Mahatma Gandhi National Rural Employment Guarantee Scheme), a financial-inclusion measure that would reach the population at the bottommost layers of the social and economic hierarchy. Alongside it there would be others, such as payments, banking and money transfer.

Aadhar II

To me, these three aspects (the vernacular medium, the Aadhaar UID project and DCTS under MGNREGS) coming together would form India's next killer app, one that enhances penetration across social levels, geographies and economic classes. The Internet was designed as the information superhighway; connecting people to this superhighway would be India's next killer Internet app.


Rise of India’s Digital Consumer (Part I)

Posted in e-commerce by Manas Ganguly on August 27, 2012

One of my earlier posts deals with the e-commerce industry in India at this point in time. This series of posts will examine the rise of India's digital consumer.

There are 124 million Internet users in India today, a growth of 41% year on year, of which 20 million access the Internet through smartphones and tablet computers. According to eBay, this number is expected to grow 100% over the next year as the number of such devices grows every day. comScore also reports India to be the fastest-growing online market among the BRIC countries, and India's explosive online growth story will continue because most online categories in India currently show below-average penetration compared to global averages. With 124 million Internet users, India is at roughly 10% Internet penetration. Corresponding to the rapid growth of the Internet in India, Forrester estimates e-commerce revenue in India will increase from $1.6 billion in 2012 to $8.8 billion by 2016, accelerated by the increasing penetration of the Internet on mobile and social media.

Now here's the striking part: for a country that is supposed to be technology-phobic, or simply to lack access to technology for economic reasons, over 94% of evolved Internet shoppers surf the Internet, 87% of users compare product prices online and 68% of them have made online purchases using their smartphones and mobile devices.

As Internet penetration increases and smartphones and tablets become more affordable, mobile commerce (mCommerce) volumes in India will rise. Online purchasing through mobile phones is catching up fast in non-urban and rural areas; the ratio of rural to urban buyers is around 1:10 right now, but it may go up to 6:10 over the next two years. The consumer Internet shopping habit is now forming quickly, with most of these users using their mobiles as a window to transact 'anytime and anywhere'.

To be continued.

Big Data: Controlling the beast by its horns

Posted in Mobile Data & Traffic by Manas Ganguly on February 20, 2012

(This is the fourth in a series of posts on Big Data and the Internet of Things. Read the first, second and third posts here.)

“I look for hot spots in the data, an outbreak of activity that I need to understand. It’s something you can only do with Big Data.” – Jon Kleinberg, a professor at Cornell

Researchers have found a spike in Google search requests for terms like “flu symptoms” and “flu treatments” a couple of weeks before there is an increase in flu patients coming to hospital emergency rooms in a region (and emergency room reports usually lag behind visits by two weeks or so). Global Pulse, a new initiative by the United Nations, wants to leverage Big Data for global development. The group will conduct so-called sentiment analysis of messages in social networks and text messages – using natural-language deciphering software – to help predict job losses, spending reductions or disease outbreaks in a given region. The goal is to use digital early-warning signals to guide assistance programs in advance to, for example, prevent a region from slipping back into poverty.

In economic forecasting, research has shown that trends in increasing or decreasing volumes of housing-related search queries in Google are a more accurate predictor of house sales in the next quarter than the forecasts of real estate economists.
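
As a toy illustration of the "leading indicator" idea in the two examples above, the sketch below uses made-up weekly numbers (not the actual Google, flu or housing data) to check whether a search-volume series correlates more strongly with an outcome series once the outcome is shifted a couple of weeks into the future:

```python
import numpy as np

# Made-up weekly series: search volume for "flu symptoms" and ER flu visits.
# The visits series is constructed to trail the search series by about 2 weeks.
searches = np.array([10, 12, 15, 30, 55, 80, 70, 50, 35, 25, 18, 14], dtype=float)
visits   = np.array([ 5,  6,  7,  9, 14, 28, 50, 75, 68, 47, 33, 24], dtype=float)

def lagged_corr(x, y, lag):
    """Correlation between x[t] and y[t + lag], lag measured in weeks."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

for lag in range(0, 4):
    print(f"lag {lag} weeks: r = {lagged_corr(searches, visits, lag):.2f}")
# With these toy numbers the correlation peaks at a lag of about two weeks,
# which is the pattern the flu-search and housing-search studies describe.
```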

Big Data has its perils, to be sure. With huge data sets and fine-grained measurement, statisticians and computer scientists note, there is an increased risk of “false discoveries.” The trouble with seeking a meaningful needle in massive haystacks of data is that “many bits of straw look like needles.”

Data is tamed and understood using computer and mathematical models. These models, like metaphors in literature, are explanatory simplifications. They are useful for understanding, but they have their limits. Privacy advocates warn that a model might spot a correlation and draw a statistical inference that is unfair or discriminatory, based on online searches, affecting the products, bank loans and health insurance a person is offered.
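
The false-discovery trap is easy to demonstrate: test enough unrelated variables against the same target and some will correlate strongly by pure chance, which is exactly how "bits of straw look like needles." A purely illustrative sketch, using random numbers rather than any real data set:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=50)                 # 50 observations of some outcome
candidates = rng.normal(size=(10_000, 50))   # 10,000 unrelated "predictor" series

# Correlation of each candidate with the target, even though all are pure noise.
corrs = np.array([np.corrcoef(c, target)[0, 1] for c in candidates])
print(f"best absolute correlation found: {np.abs(corrs).max():.2f}")
# With this many candidates, the best match typically looks impressively strong
# (|r| around 0.5 or more): a "needle" that is really just straw.
```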

Despite the caveats, there seems to be no turning back. Data is in the driver’s seat. It’s there, it’s useful and it’s valuable, even hip. It’s a revolution. We’re really just getting under way. But the march of quantification, made possible by enormous new sources of data, will sweep through academia, business and government. There is no area that is going to be untouched.

Channelizing and Structuring Big Data: Data First Thinking

Posted in Mobile Data & Traffic by Manas Ganguly on February 16, 2012

(This is the third in a series of posts on Big Data and the Internet of Things. Read the first and second posts here.)

There is plenty of anecdotal evidence of the payoff from data-first thinking. The best-known is still “Moneyball,” the 2003 book by Michael Lewis, chronicling how the low-budget Oakland A’s massaged data and arcane baseball statistics to spot undervalued players. Heavy data analysis had become standard not only in baseball but also in other sports, including English soccer, well before last year’s movie version of “Moneyball,” starring Brad Pitt.

Artificial-intelligence technologies can be applied in many fields. For example, Google’s search and ad business and its experimental robot cars, which have navigated thousands of miles of California roads, both use a bundle of artificial-intelligence tricks. Both are daunting Big Data challenges, parsing vast quantities of data and making decisions instantaneously.

The wealth of new data, in turn, accelerates advances in computing – a virtuous circle of Big Data. Machine-learning algorithms, for example, learn from data, and the more data there is, the more the machines learn. Take Siri, the talking, question-answering application in iPhones, which Apple introduced last fall. Its origins go back to a Pentagon research project that was then spun off as a Silicon Valley start-up. Apple bought Siri in 2010 and kept feeding it more data. Now, with people supplying millions of questions, Siri is becoming an increasingly adept personal assistant, offering reminders, weather reports, restaurant suggestions and answers to an expanding universe of questions.

Google searches, Facebook posts and Twitter messages, for example, make it possible to measure behavior and sentiment in fine detail and as it happens. In business, economics and other fields, decisions will increasingly be based on data and analysis rather than on experience and intuition.

Retailers like Walmart and Kohl’s analyze sales, pricing and economic, demographic and weather data to tailor product selections at particular stores and determine the timing of price markdowns. Shipping companies like U.P.S. mine data on truck delivery times and traffic patterns to fine-tune routing. Police departments across the country, led by New York’s, use computerized mapping and analysis of variables like historical arrest patterns, paydays, sporting events, rainfall and holidays to try to predict likely crime “hot spots” and deploy officers there in advance. Research has found that companies adopting “data-driven decision making” achieved productivity gains that were 5 to 6 percent higher than other factors could explain.

Big Data and the Internet of Things.

Posted in Mobile Data & Traffic by Manas Ganguly on February 15, 2012

(This is the second in a series of posts on Big Data and the Internet of Things. Read the first post here.)

With an 18-fold increase expected over the next five years, data is a new class of economic asset, like currency or gold. With the growing multiplicity of data sources, Big Data has the potential to be “humanity’s dashboard,” an intelligent tool that can help combat poverty, crime and pollution. Privacy advocates take a dim view, warning that Big Data is Big Brother in corporate clothing.

What is Big Data? A meme and a marketing term, for sure, but also shorthand for advancing trends in technology that open the door to a new approach to understanding the world and making decisions. There is a lot more data, all the time, growing at 50 percent a year, or more than doubling every two years, estimates IDC. It’s not just more streams of data, but entirely new ones. For example, there are now countless digital sensors worldwide in industrial equipment, automobiles, electrical meters and shipping crates. They can measure and communicate location, movement, vibration, temperature, humidity, even chemical changes in the air.

Linking these communicating sensors to computing intelligence gives rise to what is called the Internet of Things, or the Industrial Internet. Improved access to information is also fueling the Big Data trend. For example, government data – employment figures and other information – has been steadily migrating onto the Web. In 2009, Washington opened the data doors further by starting Data.gov, a Web site that makes all kinds of government data accessible to the public.

Data is not only becoming more available but also more understandable to computers. Most of the Big Data surge is data in the wild – unruly stuff like words, images and video on the Web and those streams of sensor data. It is called unstructured data and is not typically grist for traditional databases. But the computer tools for gleaning knowledge and insights from the Internet era’s vast trove of unstructured data are fast gaining ground. At the forefront are the rapidly advancing techniques of artificial intelligence like natural-language processing, pattern recognition and machine learning.
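
As a toy example of turning "data in the wild" into something a traditional database or algorithm can work with, the sketch below reduces free-form text to structured word counts, the simplest form of the feature extraction that natural-language processing builds on (a bag-of-words illustration of my own, not any particular product's pipeline):

```python
import re
from collections import Counter

posts = [
    "Flu symptoms everywhere, the whole office is out sick",
    "Great weather today, going for a run",
    "Pharmacy was out of flu medicine again",
]

def bag_of_words(text: str) -> Counter:
    """Lower-case the text, strip punctuation, and count word occurrences."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# Unstructured sentences become structured (word, count) pairs that can be
# stored, aggregated, or fed to a machine-learning model.
for post in posts:
    print(bag_of_words(post).most_common(3))
```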

App Usage powering Mobile Internet growth

Posted in Mobile Computing by Manas Ganguly on January 9, 2012

The era of mobile computing, catalyzed by Apple and Google, is driving one of the largest shifts in consumer behavior of the last forty years. Impressively, its rate of adoption is outpacing both the PC revolution of the 1980s and the Internet boom of the 1990s. Since 2007, more than 500 million iOS and Android smartphones and tablets have been activated. By the end of 2012, Flurry estimates that the cumulative number of iOS and Android devices activated will surge past 1 billion. According to IDC, over 800 million PCs were sold between 1981 and 2000, making the rate of iOS and Android smart-device adoption more than four times faster than that of personal computers. And while the Internet began its commercial ramp in 1996, iOS and Android devices have seen double the number of activations in their first five years compared to the number of Internet users reached in the Internet's first five years (Internet 1996–2001 vs. smart devices 2007–2012).
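
A rough back-of-the-envelope check of the "four times faster" comparison, using only the figures quoted above (cumulative totals divided by the length of each period, which glosses over the very different shapes of the two adoption curves):

```python
pcs_per_year = 800e6 / 20          # ~800 million PCs sold over 1981-2000
smart_devices_per_year = 1e9 / 5   # ~1 billion iOS/Android activations over 2007-2012

print(f"PCs:           {pcs_per_year / 1e6:.0f}M per year")
print(f"Smart devices: {smart_devices_per_year / 1e6:.0f}M per year")
print(f"Ratio: about {smart_devices_per_year / pcs_per_year:.0f}x")
# ~40M vs ~200M per year, i.e. roughly 5x, consistent with "more than four times faster".
```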

On top of this massively growing iOS and Android installed base, roughly 40 billion applications have already been downloaded from the App Store and Android Market. The average smartphone user is beginning to spend more time in mobile applications than browsing the web.

This chart by Flurry compares how daily interactive consumption has changed over the last 18 months between the web (both desktop and mobile web) and native mobile apps. Since June 2011, time spent in mobile applications has grown: smartphone and tablet users now spend over an hour and a half of their day using applications. Meanwhile, average time spent on the web has shrunk from 74 minutes to 72 minutes. Users seem to be substituting applications for websites, since apps may be more convenient to access throughout the day. People are now spending less time on the traditional web than they did a year ago. This drop appears to be driven largely by a decrease in time spent on Facebook from the traditional web. In June 2011, the average Facebook user spent over 33 minutes per day on the website; now, that number is below 24 minutes. Time spent on the web excluding Facebook has grown at a modest rate of 2% between June 2011 and December 2011.

Meanwhile, the growth in time spent in mobile applications is slowing, from above 23% between December 2010 and June 2011 to a little over 15% between June 2011 and December 2011. The growth is predominantly driven by an increase in the number of sessions, as opposed to longer session lengths: consumers are using their apps more frequently.

Facebook is the most used app on Android among 14–44 year olds, surpassing usage of Google’s own native, pre-installed apps. Additionally, Facebook Messenger became the top downloaded app at least once during 2011 in more than 100 different App Store countries. In the U.S., the largest App Store market, Facebook Messenger ranked as the top overall app across all categories.

Video and Content growth wave in the new Mobile economy

Posted in TV and Digital Entertainment, Value added services and applications by Manas Ganguly on November 10, 2011

A research report by Ericsson endeavours to put some numbers to global data traffic projections: mobile broadband subscriptions are expected to reach almost 5 billion in 2016, up from an expected 900 million by the end of 2011, a compound annual growth rate of roughly 40 percent. Total smartphone traffic is expected to triple during 2011 and increase 12-fold by 2016 (roughly equal to PC-generated traffic). Growth in mobile data traffic between 2011 and 2016 is mainly expected to be driven by video. By 2016 more than 30 percent of the world’s population will live in metropolitan and urban areas with a density of more than 1,000 people per square kilometer. These areas represent less than 1 percent of the Earth’s total land area, yet they are set to generate around 60 percent of total mobile traffic. Overall, an increase in mobile broadband, new smartphones, and higher app consumption will all drive the push for more data, and smartphones alone will account for a huge part of that.
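
A quick check of the compound annual growth rate implied by the two subscription figures above (a back-of-the-envelope calculation of my own, not a figure taken from the Ericsson report):

```python
# Subscriptions grow from ~900 million (end of 2011) to ~5 billion (2016).
start, end, years = 0.9e9, 5e9, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")   # about 41% per year
```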

Compare this to the growth in global consumer Internet traffic, which is expected to grow 5x during 2009–14 (Cisco report). Though the time intervals for these two data points are not concurrent, the comparison highlights the growth perspective in data/Internet-led networks. The same report puts the growth in mobility-based data at 39x between 2009 and 2014. Increasing video traffic, driven by live video and TV, is expected to grow global consumer Internet video consumption by a factor of 10x between 2009 and 2014 (and to me that is massively understated). The growth in Internet video consumption will be prevalent across all categories of video: Internet to PC (long-form, short-form and live), Internet to TV, and ambient video/Internet PVR.

Driven by lifestyle requirements, living situation and employment status, consumption of Internet video content is accelerating at a smart pace, even as content and its discovery are becoming smarter. What could this mean for consumers and service providers?
For the consumer: they will seek capabilities that let them easily and securely access the content, applications and infrastructure they want from any location or device.

For the service provider: it would mean infrastructure capabilities that are re-usable, expandable and scalable for quick time to market, plus better insight into and control over the consumer's end-to-end experience. Smart content delivery networks constitute a $6bn–$15bn market for service providers by 2015. Massive Internet video growth poses huge operating challenges, but also unique revenue and monetization opportunities. Content management alone will perhaps not be enough unless service providers are clear on their consumer segmentation, segment focus and positioning strategies, and on how much money could be made on these services. Again, since this sector is fairly nascent at this point in time, regulatory and anti-trust considerations could also be key influencers.

Google Panda: Algorithm versus Wisdom

Posted in Internet and Search by Manas Ganguly on November 3, 2011

Google’s search engine is a triumph of technology. There’s no denying that. It was the capstone that completed the initial structure of the Internet.

After more than a decade of dominance beyond any serious competition, the biggest challenge for Google lately has been the declining potency of its search engine. In recent years, Google searches have become a lot less useful and a lot more frustrating. It has become more difficult to find things that are on the Internet, even things that were featured previously. Another example is pages that have been posted to the web more recently: they get overpowered in the Google algorithm by older pages that have had time to accumulate more incoming links.
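
To make the freshness problem concrete, here is a toy scoring sketch (my own illustration, in no way Google's actual algorithm): if the ranking score is driven mainly by accumulated inbound links, an older page will usually beat a newer, possibly more relevant one unless the score also rewards recency.

```python
from dataclasses import dataclass

@dataclass
class Page:
    title: str
    inbound_links: int
    age_days: int

# Hypothetical pages with made-up link counts and ages.
pages = [
    Page("Old tutorial from 2004", inbound_links=5000, age_days=3000),
    Page("Fresh post covering the same topic", inbound_links=40, age_days=10),
]

def link_only_score(p: Page) -> float:
    return p.inbound_links                        # pure popularity: old pages win

def freshness_aware_score(p: Page, half_life_days: float = 365) -> float:
    decay = 0.5 ** (p.age_days / half_life_days)  # halve the link weight every year
    return p.inbound_links * decay

for scorer in (link_only_score, freshness_aware_score):
    ranked = sorted(pages, key=scorer, reverse=True)
    print(scorer.__name__, "->", [p.title for p in ranked])
```

With these numbers the link-only score ranks the old tutorial first, while the decayed score lets the fresh post win, which is the trade-off the paragraph above describes.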

But the Internet is now in the midst of a dramatic remodel, and it's unclear whether Google search will get the refresh it needs to make it more appealing than ever, or whether it will be one of the things that gets painted over.
1.) The search results on Google are becoming increasingly ineffective because they are littered with "web spam" and articles from "content farms" (sites creating faux content to serve as many ads as possible).
2.) Social media has been replacing traditional web search for many different kinds of information gathering, and Google did not have a legitimate play in social until Google+. (After several high-profile social flameouts, such as Google Wave and Google Buzz, Google+ seems to have made its mark.)

Ironically, the biggest problem for Google has grown up around its greatest cash cow, search, in the form of SEO – Search Engine Optimization. A whole cottage industry has arisen around helping sites optimize their pages to get ranked as highly as possible in Google. As a result, the sites that land at the top of Google search results have become more about which sites are best optimized rather than which ones have the best and most relevant content.

Even worse, whole companies have emerged whose entire purpose is to create low-quality content that is highly-optimized for Google and loaded up with ads to turn a quick buck. These “content farms” have become big business.

Recognizing the growing risks that points 1 and 2 pose to Google's relationship with users, and ultimately to its business model, the company moved aggressively in 2011 to fix the situation. Google's fix is called Panda. Panda was released in February 2011, with newer versions in April, May, June and September 2011.

However, the Panda update has had a difficult time targeting content farms and has accidentally affected a lot of good content. So for every eHow or Demand Media property (bogus content sites) that Panda obliterated, Panda also hurt sites like TechRepublic (a genuine content host).

Google argues that this creates a fairer and more objective system, and that introducing human filtering into the system would make it biased and subjective. While that may be true, the big question is whether human intervention would make Google search more effective, and ultimately more accurate. The problem with the algorithm (and artificial intelligence in general) is that it has no common sense or wisdom — at least not yet. Meanwhile, the systems that Google search is increasingly competing with for information discovery — social search and mobile apps — use the collective wisdom of the community or targeted experts to deliver better information more quickly than Google search, in many cases.

So far, Google (even with Panda) has had a difficult time targeting content farms and it has ended up accidentally removing a bunch of useful content in the process. The big question now is whether Google can learn from this experience and change, or if it will eventually fade into becoming a fallback mechanism that people use when they can’t find the information they need from social search (asking their Twitter or Facebook friends) or a mobile app.


HTML5 – Future of the web (Losers offsetting losses) (Part III)

Posted in Internet and Search by Manas Ganguly on October 24, 2011

Read Part I and Part II here.

Apple has benefited from a similar monopoly, but on deployment. Capturing 30% of every application and piece of content sold to an iPhone or iPad user has become a multi-billion dollar business for the boys in Cupertino. With HTML5, an increasing amount of content, and eventually applications, will be able to circumvent the Apple bottleneck. The good news for Apple is that the advent of HTML5 may once and for all put their Achilles heel of not supporting Flash behind them. Apple has rushed to adopt HTML5 across its product line, and Steve Jobs was very direct and vocal that the combination of HTML5, CSS, and Javascript was far superior to Flash as far as Apple was concerned.

Apple's rush to adopt HTML5 might seem at odds with what many financial analysts have described as the major threat HTML5 poses to Apple's App Store monopoly. Apple has been tweaking its implementation of HTML5 in the Safari browser to limit some capabilities, like auto-play of audio and video, citing customer satisfaction as the reason. Perhaps it will be able to keep steering developers who want the ultimate experience on iPhones and iPads toward the App Store, even if it's just to sell wrapped versions of their HTML5 interfaces. In any case, Apple has certainly decided that it has more to gain from embracing the emerging HTML5 standard (growing the potential market for iPads and iPhones) and getting out of its morass with Flash than it would by dragging its feet or proposing its own alternative. Complicating matters are some ongoing patent disputes between Apple and the W3C (World Wide Web Consortium), which drives standards for the web.

If Adobe and Apple are right in their public assessment of the opportunities HTML5 presents them, then Microsoft may be the biggest loser, although even desktop vendors will benefit in some ways, as trendy web applications will be able to run on their machines instead of being limited to tablets. Chief among the big losers is the incumbent web monopoly, notably Microsoft: HTML5's platform independence hits Microsoft where it hurts the most, desktops and desktop applications. Obviously Microsoft isn't standing still, so whether its share of Internet-connected devices continues to slip (from 95% to 50% in the last three years) is open to debate, but the dominance will clearly erode, a trend likely to be accelerated by HTML5's device-independent promise.
Revamping the web with an improved set of content protocols might really benefit everyone.

Clearly, though, Microsoft, Apple, and Adobe have the most at risk, and could still turn out big losers on this one.
