Page Impressions Ltd Blogcetera

Wednesday, October 08, 2014

Big Data - the coming revolution

Big Data is rapidly becoming the latest big thing in computing. The issue for us all is that its impact will extend far beyond the world of computing and will affect every aspect of our lives, from retail to health and everything in between. Static databases are becoming dynamic sources of unimaginable insight, and the amount of structured and unstructured data being produced is phenomenal. Where once databases were compiled from user input into structured forms that companies used to provide basic trend and financial information, we are now met with terabytes of unstructured data accumulated and stored as a result of machine-generated interactions, from every transaction through supermarket terminals to every image stored by the millions of CCTV cameras that have proliferated in our public and private spaces. What has changed is that the cost of storage has plummeted; combined with the near-infinite connectivity offered by the Internet, the volume of stored data has snowballed.
With the advent of freely available tools such as Hadoop and MapReduce, it has become possible for machines to begin to analyse this huge warehouse of unstructured data. MapReduce, originally described by Google, is a programming model and associated implementation for processing and generating large data sets with a parallel, distributed algorithm on a cluster, while Hadoop is a free, Java-based framework from the Apache project that supports this style of processing in a distributed computing environment. These tools enable almost anyone to begin to analyse their data for hidden insights into their business activity or the world around them. More importantly, Big Data has the potential to alter the economics of some of our most important industries. A recent report by McKinsey suggested that if US health care could use big data creatively and effectively to drive efficiency and quality, the sector could generate more than $300 billion in value every year, two-thirds of which would come from reducing national health care expenditure by about 8 percent. In the private sector, McKinsey suggested, a retailer using big data to the full has the potential to increase its operating margin by more than 60 percent, while in the public sector, across the developed economies of Europe, government administration could save more than €100 billion ($149 billion) in operational efficiency improvements alone.
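To make the model concrete, here is a minimal, single-process sketch of the MapReduce pattern in Python. It is illustrative only: real Hadoop jobs distribute the map, shuffle and reduce phases across a cluster and are typically written in Java, but the shape of the computation is the same.

```python
from collections import defaultdict
from itertools import chain

def map_phase(record):
    # Map: emit a (key, value) pair for every word in the input record.
    for word in record.lower().split():
        yield (word, 1)

def reduce_phase(key, values):
    # Reduce: combine all values that share a key.
    return (key, sum(values))

def map_reduce(records):
    # Shuffle: group intermediate pairs by key, then reduce each group.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map_phase(r) for r in records):
        groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

logs = ["big data big insight", "data at scale"]
print(map_reduce(logs))
# {'big': 2, 'data': 2, 'insight': 1, 'at': 1, 'scale': 1}
```

Because each map call touches only one record and each reduce call only one key, the phases can be spread over thousands of machines without changing the logic, which is precisely what makes the model suited to terabyte-scale unstructured data.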
These are figures that should make every CEO, politician and citizen sit up and take notice.  Big Data has enormous power to change the lives of everyone it touches in ways we cannot yet begin to understand.
In addition, whilst in the past the analysis and evaluation of data sets was the preserve of the expert, that too is changing as computers increasingly take on the tasks of analysis, evaluation and learning.  Computers are not only evaluating the data, they are increasingly "learning" how to improve and extend our understanding of what the data means.  For example, a computer given the task of analysing a vast database of cancer biopsy results duly identified twelve key traits that might suggest cancerous cells.  The remarkable thing was that only nine of those traits had previously been identified in the published medical literature.  The use of machines to evaluate Big Data had moved the science of medical diagnosis on significantly, and in doing so potentially advanced our detection of cancer and improved survival rates.
So Big Data cannot be ignored by anyone and the trend is to enable access to such tools to a wider and wider audience enabling every business and public sector body the opportunity to benefit from this key aspect of the third industrial revolution.
Is there a downside? Well, the primary impact in the medium term will be to render unemployed many of the professional classes once seen as having jobs for life.  Big Data has the potential to impact highly skilled roles which have relied upon experience and expertise built up over many years.  For example, taking the cancer biopsy analysis a stage further, computers will be able to calculate radiology treatment plans more quickly and more accurately than a highly trained and experienced consultant radiologist.  On the other hand, an enormous shortage of data analysts to drive this Big Data revolution has already been identified.  One thing is clear: where industrial automation and IT capabilities eliminated manual labour and secretarial jobs in the 1980s and 1990s, it will be the highly skilled white-collar jobs of consultants across a range of professions in medicine, banking, insurance and engineering that become vulnerable to Big Data.
Equally there is no going back, just as with previous industrial revolutions the genie is out of the bottle and we need to adapt to take advantage of the opportunities offered by these developments.

Tuesday, February 11, 2014

Is BT Infinity worth the hassle?

Today BT installed its flagship product, BT Infinity, at our property.  I was looking forward to the superfast broadband promised by BT's adverts.  Unfortunately the service came with a number of faults.
1. The range of the wireless router appears to be rather less than 4m.  This has resulted in the need to balance the router on a bookcase around the corner from the master socket in order to set up a "line of sight" connection to my internet-enabled TV.  Very poor.
2. The BT Openreach engineer who installed the new master socket disconnected all my home extensions.  This fault didn't become apparent until after he had left.  Despite numerous calls to BT Customer Service, they seem totally unable to stay on the line and have repeatedly dropped the connection.
When I finally did get through, I was told that they could not schedule an engineer visit until next week! To cap it all, Gnesh then dropped the line again.

So if I have any advice for potential customers of BT Infinity, it is: go elsewhere.  BT is just not fit for purpose. I would not have "upgraded" had I known what a total fiasco BT would make of a simple installation.

Friday, December 13, 2013

Google to institute only pay if viewed!

Fraud has plagued the on-line advertising market since its inception. Back in 2009, Click Forensics estimated that 14.1% of the clicks on ads were bogus, costing advertisers and ad networks money.

All PPC (pay per click) ad providers are keen to combat click fraud and have sophisticated methods for doing so, although it still represents a significant proportion of their income, since it is the advertiser who always ends up paying.

However, on Thursday (13th December 2013) Google announced that it was introducing a new system governing how ads are viewed and consequently charged for. “If you are an advertiser and a human being didn’t see your ad, then frankly nothing else matters,” said Neal Mohan, Google’s vice-president of display advertising products. “If you are a marketer, why pay if a human being did not see the ad?”

The global on-line advertising industry is worth $117bn, and it is estimated that as many as half of the digital ads that marketers buy are never seen at all, with a large portion only being viewed if a website user scrolls all the way down to the bottom of a web page. The question of how effective advertising really is has always dogged every form of advertising, be it TV or billboard posters.

Now Google intends to introduce an approach called Active View, based on an emerging industry standard called IAB/3MS, which states that an ad is only “viewable” if more than 50 per cent of it is visible on the screen for one second or longer. Advertisers will be able to see a report of how many viewable impressions they have received for any given campaign, and this data can be used to inform future campaigns. Under Active View, if your ad is buried "below the fold" and doesn't get seen, you will not be billed; if your on-line ad has been seen for at least one second, you will be billed for that impression.
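The viewability rule itself is simple enough to sketch. The function below is a hypothetical illustration of the "50% visible for at least one second" test, not Google's actual implementation or API; the sample format (timestamp, visible fraction) is invented for the example.

```python
def is_billable(samples, min_fraction=0.5, min_seconds=1.0):
    """Decide whether an ad impression meets the viewability standard.

    samples: chronological list of (timestamp_seconds, visible_fraction)
    pairs reported as the user scrolls. Billable only if the ad stays at
    least min_fraction visible for a continuous min_seconds run.
    """
    run_start = None
    for t, fraction in samples:
        if fraction >= min_fraction:
            if run_start is None:
                run_start = t          # a qualifying run begins
            if t - run_start >= min_seconds:
                return True            # visible long enough: bill it
        else:
            run_start = None           # ad dropped below 50%: reset
    return False

# Ad scrolled into view at t=2.0 and stayed 60% visible for 1.5s: billable.
print(is_billable([(0.0, 0.1), (2.0, 0.6), (3.5, 0.6)]))   # True
# Ad never more than 40% on screen ("below the fold"): not billable.
print(is_billable([(0.0, 0.4), (5.0, 0.4)]))               # False
```

The point of the standard is exactly this kind of deterministic test: an advertiser can audit why a given impression was or was not charged.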

I have always been rather sceptical about the value of viewed ads which carry no "call to action", and while attempting at least to bill only for a viewing by a human being is a step in the right direction, it still doesn't make for very effective use of one's advertising dollar, pound or euro. Billing for ads that were never seen was always a very dodgy practice, and this approach should have been introduced a long time ago. Whilst this may not be click fraud, I do think it continues to call into question the billions spent on-line on adverts which are just about brand awareness and fail to trigger a genuine sales lead. Maybe advertisers and the agencies placing the ads need to think rather more carefully about what they are attempting to achieve with their on-line campaigns.

This is a good start, but Google and the other major on-line players need to go much further to clean up this industry and ensure advertisers get the value for money they pay for.

Further reporting:-
Financial Times, The Daily Telegraph, BBC

Monday, December 09, 2013

Has the Mobile Internet finally emerged as a primary retail channel in 2013?

Back in 2000, I worked with a former colleague from business school to raise investment for what we would now call a location-based mobile application. The application, known as ZagMe, enabled retailers to send offers to users of the service based on their location, such as a shopping mall (BBC ZagMe news report). Unfortunately the technology platform was based on WAP, which proved unreliable and costly to use. Today companies such as Foursquare have taken the concept to a mobile broadband customer base happy to use and act on location-based offers. However, 2013 has seen the breakthrough of the mobile device, including tablets, as a primary channel for online purchasing. Retailers have rushed to adapt to the new format and even to bring out their own optimised devices, with Tesco and Aldi launching bespoke low-cost tablets. According to mobile experience management platform Artisan (www.useartisan.com), 77% of consumers intend to make purchases through a mobile and/or tablet app this holiday season.

This statistic suggests that using mobile devices to purchase through sites such as Amazon and John Lewis is becoming the norm rather than the exception.  In a recent survey, OFCOM forecast the number of mobile broadband subscribers to be in the region of 5.1 million.  With tablet computers of every type likely to be the most sought-after gift this Christmas, that number is sure to grow dramatically, and with it the opportunity for retailers to grow their online sales channel as well as location-based sales by offering incentives to bring users into High Street stores.

Sadly, all this furious activity in mobile sales comes too late for ZagMe, which succumbed to the dotcom bust while trying to raise a second round of finance.  Today's location-based apps have a much brighter future!

UK ISP, Mobile Internet and Cable Subscriber Numbers - December 2013

Here is an update on the UK ISP market covering the DSL and cable access markets as well as the mobile dongle market in the UK.  I have also added in the published figures for 3G and 4G mobile access devices. I have used ITU published data for broadband usage numbers, Nielsen ratings and ISP published figures, as well as the reports and disclosures of each of the companies shown below, to get an accurate picture. I believe these figures represent a reasonably accurate picture of the genuine adoption of broadband, whether via DSL, cable or mobile dongle. Broadband connections included in this data cover download speeds equal to or faster than 256kbit/s.

Table 1. Broadband ISPs (Source: ITU, Nielsen, OFCOM & ISP Published Figures)

No.   ISP/Provider      User Numbers    % of UK Market (inc Mobile)
1     BT Retail            6,961,000    26.07%
2     Sky Broadband        5,017,000    18.79%
3     Virgin Media         4,488,600    16.81%
4     TalkTalk Group       4,076,000    15.27%
5     Orange                 714,000     2.67%
6     Kingston Comms         178,200     0.67%
7     Zen Internet            94,000     0.32%
8     Vodafone UK             85,000     0.31%
9     Thus Group              80,000     0.30%
10    Entanet                 70,000     0.26%

BT continues to grow its user base at the expense of the smaller players.  Sky Broadband has continued to grow, primarily by acquisition, having taken over O2’s broadband user base.  However, although BT Sport is still in its early days, it is clearly impacting Sky’s organic growth.


The total UK Internet connectivity market is not made up only of broadband ISPs; connection is now frequently through 3G and a growing 4G user base. Consequently I have taken the total market as comprising both fixed-line broadband and mobile internet access, as I think it would be erroneous to suggest that they are not part of the same competitive market. The broadband ISP market is estimated at just under 22 million connections. OFCOM estimates mobile internet connectivity to be in the region of 5.0 million mobile broadband users in addition to the 21.7 million fixed-line users, so the total market is estimated at 26.7 million subscribers.  Of these, superfast broadband users make up just 3.7 million.
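The market-share percentages in Table 1 follow directly from dividing each provider's subscriber count by this 26.7 million total. A quick sketch confirms the arithmetic for the top four providers:

```python
# Sanity check: each ISP's share in Table 1 is its subscriber count
# divided by the estimated total market of 26.7 million connections
# (fixed-line broadband plus mobile internet access).

TOTAL_MARKET = 26_700_000

subscribers = {
    "BT Retail": 6_961_000,
    "Sky Broadband": 5_017_000,
    "Virgin Media": 4_488_600,
    "TalkTalk Group": 4_076_000,
}

for isp, count in subscribers.items():
    share = 100 * count / TOTAL_MARKET
    print(f"{isp}: {share:.2f}%")
# BT Retail: 26.07%
# Sky Broadband: 18.79%
# Virgin Media: 16.81%
# TalkTalk Group: 15.27%
```

The recomputed figures match the table, which also confirms that the percentages are taken against the combined fixed-plus-mobile market rather than fixed-line broadband alone.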

Sunday, September 29, 2013

Google Announces New Search Algorithm

Google has just announced their new search algorithm, codenamed Hummingbird.  It is the first major upgrade for three years and is a major step towards semantic web search.

Launched quietly about a month ago, the new algorithm affects about 90% of Google searches. The update is designed to provide more accurate results when faced with natural prose questions from web searchers, according to senior vice-president of search Amit Singhal.

Google stressed that a new algorithm is important as users expect more natural and conversational interactions with a search engine - for example, using their voice to speak requests into mobile phones, smart watches and other wearable technology.

Hummingbird is focused more on ranking information based on a more intelligent understanding of search requests, unlike its predecessor, Caffeine, which was targeted at better indexing of websites.
“We just changed Google's engines mid-flight - again” - Amit Singhal, Senior VP, Google Search.

It is more capable of understanding concepts and the relationships between them rather than simply words, which leads to more fluid interactions. In that sense, it is an extension of Google's "Knowledge Graph" concept introduced last year aimed at making interactions more human.

In one example shown at the presentation, a Google executive demonstrated a voice search through her mobile phone, asking for pictures of the Eiffel Tower. After the pictures appeared, she asked how tall it was. After Google spoke back the correct answer, she then asked "show me pictures of the construction" - at which point a list of images appeared.

SEO just the same – Content is king!

As regards developing successful SEO programmes, really very little has changed.  The key to success in SEO is still to create relevant and interesting content that delivers real value for consumers.

However, a subtle change is happening in how the algorithm views content: Google's new ranking algorithm focuses on the context in which content appears relative to search queries, rather than on traditional keyword matching.  Hummingbird tries to match documents based on the underlying user intent.
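A toy contrast makes the shift clearer. The sketch below compares literal keyword matching with a crude concept-aware match of the kind Hummingbird moves towards; the synonym table is invented for illustration, and Google's actual models are of course vastly richer than this.

```python
# Invented concept groups for illustration only -- not Google's data.
CONCEPTS = {"buy": {"buy", "purchase", "price", "shop"},
            "phone": {"phone", "smartphone", "mobile", "handset"}}

def keyword_score(query, document):
    # Old-style: count exact query words appearing in the document.
    q, d = set(query.lower().split()), set(document.lower().split())
    return len(q & d)

def concept_score(query, document):
    # Intent-style: expand each query word to its concept group first,
    # and score a hit if any word from the group appears.
    d = set(document.lower().split())
    score = 0
    for word in query.lower().split():
        group = next((g for g in CONCEPTS.values() if word in g), {word})
        score += bool(group & d)
    return score

doc = "best place to purchase a new smartphone"
print(keyword_score("buy phone", doc))  # 0 -- no literal word overlap
print(concept_score("buy phone", doc))  # 2 -- both underlying intents matched
```

A page that answers the question "where should I buy a phone?" without ever using those exact words scores zero under pure keyword matching but matches fully once intent is considered, which is why keyword stuffing loses its value under Hummingbird.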

Successful SEO will thus require much better content editing and site writing, to ensure that the content is answering a question rather than just stuffing in as many keywords and phrases as the text will take.  The semantic web is coming of age, with a search engine to match.


Sunday, September 08, 2013

Apple on top, but Android's position is growing stronger!

Millennial Media, the mobile ad platform, has released its latest Mobile Mix report, covering Q2 2013.  The report lists the top 20 mobile devices, adding tablets into the mix along with smartphones and feature phones. Apple-made devices dominate the rankings, taking three of the top four places; however, the increasing importance of Android-based machines is crucial. Samsung occupies the number two spot with its Galaxy S phones and gains additional presence in the top ten thanks to the Galaxy Tab and Galaxy Note. HTC and Motorola lose some representation on the list, however, and Amazon debuts with the Kindle Fire, ranking number eight overall for impressions.  The fall of alternatives to iOS and Android makes the market a two-horse race.  BlackBerry is clearly in trouble and Windows is failing to make any impression on the market; Microsoft’s purchase of Nokia is unlikely to turn Windows’ prospects around.
Top 20 Devices - Ranked by Impressions


The fact that Google Play is now outpacing the Apple App Store for downloads suggests that Android is increasingly eating Apple’s lunch.

Saturday, September 07, 2013

iOS continues to dominate the mobile ad market, but for how long?

Most recent reports suggest that Apple's iOS continues to dominate the mobile advertising market in terms of impressions and revenue, delivering almost 44% of all ad impressions and almost 50% of revenue (Figure 1).  The mobile market is increasingly looking like a two-horse race between Apple iOS and Android, and whilst on the face of it Apple continues to dominate, the decline of every operating system other than these two suggests that in the near term Android will pass iOS as it becomes the de facto alternative operating system.

Traffic share (mobile phone OS)

OS Share        % of Traffic    % of Revenue
Android             31.24%          28.08%
  Phone             30.58%          27.76%
  Tablet             0.66%           0.32%
iOS                 43.75%          49.36%
  iPhone            30.88%          36.44%
  iPad               8.04%          10.21%
  iTouch             4.83%           2.71%
RIM                  3.37%           5.41%
Symbian              5.16%           1.56%
Windows              0.26%           0.30%
Other               16.21%          15.27%

Source: Opera Mediaworks

The challenge presented by Android is strengthening as Google and its partners increase the rate of innovation.  Apple is clearly showing signs of innovation fatigue.  A recent report by Goldman Sachs pointed to several concerns, including “delayed product cycles, supply chain difficulties, product price erosion, and a slower pace of product innovation.” Apple’s ability to continue innovating at the breakneck pace it has maintained over the past few years remains a major concern.  The recent performance of Google Play, surpassing Apple's App Store downloads particularly in emerging markets, underlines the direction of travel.
This rebalancing of the market suggests that in the longer term the Android market is set to become the most important mobile advertising market.

Wednesday, February 20, 2013

4G licences only raise £2.34 billion in auction

It beggars belief that the UK Government has managed to raise just £2.34bn in licence fees for the 4G licences.  It really does suggest that the UK Government and Ofcom are either totally disconnected from the potential of the mobile internet market or totally incompetent at running an auction.  Whilst it was clear from the start that the market would not pay the astronomical £21 billion raised by Gordon Brown for the original 3G licences, the value of these licences should have been far beyond the Office for Budget Responsibility's own projection of £3.5bn, let alone the £2.34bn delivered.  Everything Everywhere, Hutchison 3G UK, Telefonica (O2), Vodafone (VOD) and BT (BT.A)'s Niche Spectrum Ventures secured the 4G licences.  It is a little disappointing that the winning bids all came from the usual suspects and that we will not see any new entrant to spice up the cosy market carve-up between the major mobile networks and BT.


Given that the UK is one of the largest mobile internet markets, and given the superior quality of coverage and speed of performance offered by 4G’s 800 MHz spectrum, the winning network providers and service providers stand to make significant revenue.  The mobile broadband service should provide smartphone and tablet users with "superfast" download speeds, and will deliver £20 billion of benefits for UK consumers over the next ten years, Ofcom said.

However, when mobile operator EE, a joint venture between T-Mobile and Orange, became the first to launch a 4G service in October 2012 in a brief monopoly, it struggled to attract users. It was forced to cut its prices in January, lowering its entry price to £31 from £36 a month.

Despite this slow take-up, I still believe that 4G has the power to become the de facto communications network for Internet access in the UK - a view clearly shared by BT, which is why it entered the auction to secure 4G capacity, which it is using to extend its WiFi network.  4G has the capacity to be a game changer in technology terms and could deliver 100 Mbit/s Ethernet-speed local access in remote locations, reducing the need to take fibre to the home.

There may not be champagne corks popping in No 11 tonight, but I am sure they will be in the HQs of our major mobile providers.




Wednesday, February 06, 2013

The Appscape

Here are a number of interesting facts concerning the growth of mobile marketing and in particular the App market.

  • Apple Apps – 700,000 (Nov 2012)
  • Android Apps – 700,000 (Nov 2012)
  • Microsoft – 120,000 (Dec 2012)
  • 37mins the average time spent on apps per day
  • Mobile apps will grow from a $6 billion industry today to $55.7 billion industry by 2015 (Forrester)
  • The average Android smartphone user has downloaded 44 apps onto their phone
  • 53% of American cellphone users now have a smartphone
  • 38% of people who use social media on mobile devices cite general browsing as their main activity
The rapid growth of Android apps is very impressive, having caught up with Apple so quickly, and will undoubtedly pass Apple in 2013.  The fact that Apple applies rigid "quality control" over which apps make it onto the iPhone, while Google sets little more than a minimum compliance bar, cannot be the only reason Android has blossomed, since Microsoft has failed to grow in quite the same way.  Apple needs to decide whether it is going to carry on denying other smartphone users the chance to use Apple iOS, and ultimately watch the market it has owned be slowly (or maybe not so slowly) eroded by Android, as happened twenty years ago during the PC wars.


iPhone 5 decline in the face of the Samsung challenge?


Sales of the iPhone 5 appear to be slowing dramatically in the UK and around the world. In what seems like a bid to drum up sales, for the first time I can recall my mobile provider is making unsolicited calls to remind me that I am due an upgrade and have the opportunity of getting my hands on an iPhone 5 as part of my package.  Previously it appeared I needed to be best friends with the chairman of the mobile operator to get such an offer.   Furthermore, according to the Wall Street Journal, Apple has cut an iPhone 5 display manufacturing order by half, citing "weaker-than-expected demand". The display order, which was targeted for the January to March quarter, was cut along with orders for other key components in December.

Given the crowded marketplace that the smartphone arena has become, it is only good business that there should be changes to manufacturing orders.  However, this data seems to confirm that globally Samsung is beginning to get the upper hand in head-on smartphone competition with Apple.  With 1.3 billion smartphones in use worldwide by the end of 2012 and 465 million Android smartphones sold in 2012, Google’s operating system has now captured a 66% global market share.  Samsung has used the Android OS to drive sales with its latest smartphone, the Galaxy SIII, with shipments estimated at 18 million units for the third quarter of 2012, while its smartphone and overall mobile device shipments are projected at 59 million and 106 million units respectively.

Apple is rumoured to be accelerating the launch of the iPhone 6 to combat this growing Korean threat, so it is little surprise that it is ramping down iPhone 5 production to make way for a newer model.  Clearly, being sued by Apple is good commercial business for Samsung, since it has crystallized the belief that the once mighty Apple may have a worthy competitor.  Litigation can have unexpected consequences, and it may well be that Apple's lawyers are better at marketing Samsung's products than at helping their own side win.

Summary of the Latest Social Media Data.

I was recently required to pull together a presentation covering the impact of social media on search engine optimisation, and as part of that exercise I extracted a collection of the latest data points on the three leading social media sites: Facebook, LinkedIn and Twitter.  All the data is referenced from the companies themselves or studies they have commissioned.

Facebook



  • 1 billion – Number of monthly active users on Facebook, passed in October 2012
  •  31% - Percentage of users that check in more than once a day.
  • 135 million – Number of monthly active users on Google+.
  • Facebook accounts for approximately 26% of referral traffic.
  • 47% – Percentage of Facebook users that are female.
  • 29% - Percentage of Google+ users that are female.
  • 40.5 years – Average age of a Facebook user.
  • 2.7 billion – Number of likes on Facebook every day.
  • 24.3% – Share of the top 10,000 websites that have Facebook integration.
  • 4.7 billion minutes are spent on Facebook daily

LinkedIn


  • 187 million – Number of members on LinkedIn (Sept, 2012).
  • 44.2 years – Average age of a LinkedIn user.
  •  Highly targeted professional networked audience with 50% of LinkedIn users having a bachelor’s degree or higher
  • LinkedIn accounts for about 0.20% of referral traffic.
  • American users spend an average of 17 minutes on the site.
  •  22 million visit LinkedIn every day.
  •  There are 2 million companies on LinkedIn

Twitter


  • 200 million – Monthly active users on Twitter, passed in December 2012.
  • 1 million accounts are added to Twitter every day
  • 9.66 million – Number of tweets during the opening ceremony of the London 2012 Olympics.
  • 175 million – Average number of daily tweets sent throughout 2012.
  • 37.3 years – Average age of a Twitter user.
  • 40 million visit Twitter daily
  • 307 – Number of tweets by the average Twitter user.
  • 51 – Average number of followers per Twitter user.
  • 163 billion – the number of tweets since Twitter started, passed in July 2012.
  •  $259 million is Twitter’s projected ad revenue in 2012
  • 123 – Number of heads of state that have a Twitter account.
The rate of adoption of these services remains an extraordinary phenomenon, and they have become key drivers of income both for themselves and for a myriad of businesses utilising their technology and market concentration.

Thursday, December 13, 2012

Will the 4G Auction exceed £3.5bn? You bet!


Although Chancellor George Osborne is unlikely to raise anywhere near Gordon Brown’s £21bn for 3G licences, I believe he may do rather better than is being reported.  Recent auctions in Europe have raised £3bn, which suggests that the UK 4G licences will achieve £3.5bn.  Moreover, it is suggested that private equity firms, retail groups and banks, as well as international telecoms players, have entered the fray alongside the UK’s big four mobile operators - EE, Vodafone, O2 and Three - to win a slice of these 4G licences.

Ed Richards, chief executive of Ofcom, the telecoms watchdog, said that it had “fired the starting gun” on an auction process that would “release crucial capacity to support future growth, helping to boost UK productivity”.  Even in these recessionary times, total UK Internet traffic is currently projected to increase by an average of 37 per cent each year until 2015. Although the majority of traffic remains on fixed networks, traffic on mobile networks is growing at a faster rate of 84% year-over-year and is expected to account for 11% of total traffic by 2015.

Mobile access to the Internet accounts for even more of the time spent browsing, communicating and transacting than this traffic data suggests, with fixed lines used to consume bandwidth-intensive services such as video, and mobile used more for social media. Indeed, the Internet is increasingly being accessed on mobile devices, whether through mobile connections or Wi-Fi networks, and the next generation of mobile communications, 4G, will continue to shift this dynamic. Economists have suggested 4G could add as much as 0.5% to GDP through infrastructure investment alone, not to mention the enhanced capability of mobile web usage.

The superior quality of coverage and speed of performance offered by 4G’s 800MHz spectrum is a significant improvement over the current 3G licences, and will greatly enhance the opportunity for both network providers and service providers to make significant revenue.  The UK is one of the largest mobile internet markets, and consequently the opportunity to own a slice of this market will, in my opinion, drive the value of these licences far beyond the projected £3.5bn; along with the additional impact 4G will have on the UK economy, the auction could bring a welcome additional windfall for Mr Osborne.

Tuesday, December 04, 2012

Guess who will ultimately pay Amazon’s UK Taxes?


The recent moral crusade waged by the Press to get the major American Corporations such as Starbucks, Amazon and Google to pay more tax may seem at first sight entirely laudable.  Previous campaigns by Occupy Wall Street to embarrass the likes of Vodafone and Top Shop to pay more tax have had little effect.  Certainly the press didn’t seem that interested in pursuing Sir Philip Green as much as they seem to wish to pillory Google.  Maybe we are only affronted by foreign companies that appear to be ripping off the state and are quite happy for home grown companies such as Arcadia and Vodafone to avoid their share of the tax burden.

In truth, none of these companies is to blame.  The issue lies not with the smart accountants exercising their abilities to save businesses millions of pounds in tax, but with the labyrinthine tax system governments have evolved, not only domestically but internationally.  Governments not only use the tax system to generate income to spend on the defence of the realm and the NHS; they also use it to achieve strategic and tactical objectives such as encouraging investment by foreign nationals to create jobs.  Equally, foreign powers use their tax systems to attract companies to their jurisdictions.

So just how much is George Osborne missing out on?  Let us examine the case of Google.  Last year Google paid £6m in tax on UK revenue of £395m. However, the UK is the largest online ad market in Europe and Google is the largest player in that market; given that Google’s EMEA (Europe, Middle East and Africa) operations generated €12.5bn (£10.1bn), Google’s true UK turnover was in the region of $4bn (£2.5bn), on which it paid just £6m in corporation tax.  Google has located its European headquarters in Dublin, where it employs 2,500 people, to take advantage of Ireland’s favourable tax arrangements, and consolidates its ad revenue through this subsidiary. Google Ireland had pre-tax profit of just €24.3m last year on turnover of €12.5bn.  Google’s consolidated accounts suggest a different picture: earnings of $11.7bn on just under $38bn of turnover.  This suggests the true profit contribution from Europe should be in the region of $3.85bn.  By the same logic, the earnings contribution from the UK market would have been $1.2bn, or approximately £800m of profit, equivalent to £208m in corporation tax.  So there we have it: the UK Treasury is missing out on just over £200m in corporation tax.  Given that Google employs just 1,500 people in the UK, there can hardly be said to be a jobs bonus whereby we are getting significant PAYE revenues instead.
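The back-of-envelope estimate above can be laid out explicitly. The sketch below reproduces the reasoning: apply Google's consolidated profit margin to its estimated UK turnover, then the then-current UK corporation tax rate. The exchange rate and the 26% rate are assumptions used for the illustration.

```python
# Reproducing the back-of-envelope estimate in the text.
global_profit = 11.7e9      # $ consolidated earnings
global_turnover = 38e9      # $ consolidated turnover
uk_turnover = 4e9           # $ estimated UK turnover
usd_per_gbp = 1.55          # rough 2012 exchange rate (assumption)
corp_tax_rate = 0.26        # UK corporation tax rate at the time (assumption)

margin = global_profit / global_turnover      # consolidated margin, ~30.8%
uk_profit_usd = uk_turnover * margin          # implied UK profit, ~$1.2bn
uk_profit_gbp = uk_profit_usd / usd_per_gbp   # ~£0.8bn
implied_tax = uk_profit_gbp * corp_tax_rate   # ~£207m, close to the £208m above

print(f"margin {margin:.1%}, UK profit £{uk_profit_gbp/1e9:.2f}bn, "
      f"implied tax £{implied_tax/1e6:.0f}m")
```

The exact figure moves with the assumed exchange rate, but any reasonable choice lands the implied liability around £200m against the £6m actually paid, which is the gap the text is pointing at.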

Amazon is a somewhat different case.  Amazon is the largest online retailer in the world and has come to dominate the market.  In the UK, Amazon generated sales of £3.35bn, 25% of Amazon's sales outside the US, and paid just £1.8m in corporation tax to the UK Treasury.  However, in fairness to Amazon, it has created 15,000 jobs in the UK, ten times the number created by Google, and makes a far smaller margin on the many products it ships, from books to microwaves.  Equally, those 15,000 employees generate significant payroll taxes, and Amazon continues to invest heavily in UK infrastructure.  Yes, Amazon uses the Luxembourg holding company ruse, but I believe its contribution to the country is far greater than Google’s.

Harmonisation of the European Union’s tax laws as regards companies and individuals would eliminate many of these distortions, and clamping down on BVI (British Virgin Islands) corporations would also eliminate many of the tax loopholes that multinational companies take advantage of.  However, these steps are probably too far for most UK-based politicians and unlikely to occur.
HMRC could go toe-to-toe with these multinationals and try to get more out of them.  However, the most likely outcome is that the cost of all these activities would eventually be borne by the consuming public through increased prices.  Years ago we used to reckon the relative cost of goods on the basis that what cost $100 in New York cost £100 in London.  Much of this Atlantic margin has been eroded by the Internet, in no small part through the role played by both Amazon and Google.  So I would not necessarily jump to the conclusion that we would be better off if HMRC managed to get more out of the likes of Amazon and Google.

Tuesday, November 27, 2012

Does Searle’s Chinese Room argument establish that the mind is not a computer program?


I recently looked at the case for artificial intelligence and revisited the arguments laid out by John R Searle that so-called strong artificial intelligence cannot evolve from a computer program.  Using the Chinese Room (CR) argument advanced by Searle, I examine whether his contention is true: that the mind is not a computer program, in the context of what constitutes artificial intelligence (AI), and thus that the computational theory of the mind is false.  In order to do this I will establish the agreed basis of artificial intelligence, outline the Turing Test for evaluating machine intelligence, describe the CR thought experiment, and then evaluate Searle’s principal claim that the CR experiment is merely a syntactic process requiring no semantic understanding, demonstrating strong AI to be false.

In order to evaluate Searle’s CR argument we first need to have a clear understanding of artificial intelligence and in particular the form of AI, so-called ‘Strong AI’, at which Searle is directing his thought experiment.  Strong AI is the philosophical thesis that appropriately programmed computers have minds in exactly the same sense that we do.

During the 1950s, Alan Turing proposed a simple test to evaluate whether a machine is making an adequate simulation of the human mind.  The ‘Turing Test’ states that ‘if a computer can pass for a human in online chat, we should grant that it is intelligent’. 
This leads us to divide AI into four specific categories:-

                AI1         Computers are capable of thought;

                AI2         Only computers are capable of thought;

                AI3         A machine can think simply as a result of instantiating a computer program;

                AI4         Computer models are useful in the study of the mind.

Clearly AI4 is the weakest form of AI and is not considered particularly controversial.  However, the AI argument builds from AI3 through to AI1 as the claims become stronger; indeed, if AI2 is true then we are all computers!  The combination of AI1 and AI3 suggests that a thinking machine is possible and that all we need to do is write the appropriate program and run it to demonstrate a thinking machine (Wilkinson, 2005, pg 100).  Strong AI of this form suggests that a suitably programmed computer can understand natural language and actually have other mental attributes similar to the humans whose abilities it mimics.  For this reason, if the AI3 form of AI can be undermined, the entire strong AI argument is invalid.  Consequently AI3 is the form of AI that Searle considers ‘strong AI’ and which his CR argument is intended to show to be false, and hence that the mind is not a computer program.

Searle’s CR experiment is a thought experiment which imagines an English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (the database), together with a book of instructions for manipulating the symbols (the program).  People outside the room send in further Chinese symbols (the input) which, unknown to the person in the room, are questions in Chinese.  By following the instruction book, the man is able to pass out Chinese symbols that answer the questions correctly (the output).  Searle contends that the man in the CR has thereby passed the Turing Test for understanding Chinese without understanding a word of Chinese.  Put simply, Searle’s argument may be summed up as follows:-

Premise 1.           If ‘strong AI’ is true, there exists a program for Chinese such that if any computing system runs that program, that system may be considered to understand Chinese;

Premise 2.           The program may be operated by anyone, without understanding Chinese;

Conclusion.         Therefore, ‘Strong AI’ is false (from premise 2).

Premise 2 conforms to the CR and as such the inevitable conclusion is that running a program does not constitute understanding.  Searle’s central claim is that the CR experiment shows that it is not possible for syntax to result in semantics.  Syntax in this context refers to the way in which the Chinese symbols are manipulated as opposed to semantics which relates to the meaning of the symbols.  Essentially the program is purely a syntactical symbol system which merely manipulates the symbols and entirely lacks semantic properties without any understanding of their meaning (Wilkinson, 2005, pg 105).
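The distinction between syntax and semantics can be made concrete with a toy sketch of the CR "rule book": a program that maps input symbol strings to output symbol strings purely by shape, with nothing anywhere representing what any symbol means. The symbols and rules below are invented for illustration, not actual Chinese question-and-answer pairs.

```python
# A toy illustration of Searle's point: a purely syntactic rule book.
# The mapping matches symbols by shape alone; no part of the program
# encodes the meaning of any symbol. (Rules invented for illustration.)

rule_book = {
    "你好吗": "我很好",   # rule: on seeing these shapes, emit those shapes
    "你是谁": "我是人",
}

def chinese_room(input_symbols: str) -> str:
    """Follow the instruction book: match the shapes, return the
    listed output, or a fixed default string for unknown input."""
    return rule_book.get(input_symbols, "不知道")

print(chinese_room("你好吗"))   # emits the listed reply
```

The lookup succeeds or fails on string identity alone, which is precisely Searle's point: the procedure is well-defined and executable without any grasp of what the symbols mean.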

Before I consider the case against Searle’s CR experiment, it would be useful to describe the programming process.  For a process to be programmable, the process must be able to be constructed as an algorithm.  For a process to be algorithmic the following must apply:-

1.                   Every step is specifiable entirely without ambiguity;

2.                   At every step, there is no ambiguity about what the next step must be: no insight, inspiration or creativity is needed;

3.                   Provided each step is correctly executed, the procedure will produce the desired result in a finite number of steps.
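The three criteria above are satisfied by, for example, Euclid's algorithm for the greatest common divisor, a standard illustration chosen here rather than taken from the source:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: every step is specified without ambiguity
    (criterion 1), the next step is always determined with no insight
    or creativity needed (criterion 2), and it halts with the correct
    result in a finite number of steps (criterion 3)."""
    while b != 0:
        a, b = b, a % b   # the remainder strictly shrinks, so this terminates
    return a

print(gcd(48, 18))  # → 6
```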

All computers, whether based on traditional step-by-step von Neumann architecture or on parallel processing, are programmed in an algorithmic manner (Wilkinson, 2005, pgs 100-101).  The key issue is the ability to ‘frame’ the question in an algorithmic format to enable the computer to work.  Searle considers a computer program based on an algorithmic formulation to be purely syntactic and therefore unable to be semantic; hence computer programs cannot produce minds.  Searle’s contention may be shown as follows:-

Premise 1.           Programs are purely formal algorithmic processes (syntactic);

Premise 2.           Human minds have mental contents (semantics);

Premise 3.           Syntax by itself is neither constitutive of, nor sufficient for, semantic content;

Conclusion.         Therefore, programs by themselves are not constitutive or sufficient for minds.

Premise 3 supports the CR thought experiment.

There are a number of criticisms of Searle’s CR argument against strong AI, but they are essentially variants on the so-called ‘System Reply’.  The counter-argument is that no single element should be considered the ‘mind’ of the CR; certainly the man at the centre running the process does not understand the Chinese language, but rather it is the whole system operating together that develops an understanding of Chinese and hence a semantic grasp of the Chinese symbols.  Searle replies that if the man in the room were to memorise the instruction book and the database of Chinese characters, and so become the entire system, he still would not be able to attach any meaning to the formal symbols.

Others criticise the fact that the man in the room is deprived of any sensorimotor connection to the world, arguing that this is a vital missing factor.  Searle counters that the Chinese characters could be the output of a television camera and the outgoing symbols commands to a robotic arm.  The man now has a connection, but still no understanding.

Others suggest that the CR does not model the way the brain works as a complex neural network.  Searle counters by applying Ned Block’s Chinese Gym thought experiment, whereby millions of people in a huge gym are connected to one another by walkie-talkie radios, passing instructions to one another and acting as individual neurons participating in a neural network.  However, the same issue regarding semantics applies in this case as well: the Chinese Gym does not understand Chinese any more than the CR.

Daniel Dennett’s challenge to the CR is based on the issue of complexity.  He contends that Searle’s experiment is far too simple and that this is the reason no apparent understanding is present within the system.  Dennett argues that with increasing complexity you get unexpected results: radical changes in behaviour, known as emergent properties, which only occur when a system is sufficiently complex.  Such radical changes in the behaviour or properties of complex systems are common in the natural world (Wilkinson, 2007, pg 113-114).  Consequently, Dennett argues that any system capable of conversing in Chinese would most likely be far more complex than the CR experiment, and it would be difficult to say confidently that such a computer system did not understand Chinese.  Searle contends that increasingly complex programs are no more than more complex algorithms, just as syntactic as previously described and incapable of semantics however complex.  Whilst I agree that a complex computer program is just an assembly of multiple simple routines, increasing complexity often leads to increased computing capability and can result in unexpected capabilities and even outcomes.

In the same vein, the challenge posed by Patricia and Paul Churchland suggests that the issue is one of interpretation and speed: the CR experiment operates very much more slowly than the brain, and as such ‘understanding’ cannot be detected.  They use an analogy based on Maxwell’s theory that light is made up of electromagnetic waves.  In their thought experiment, a man stands in a darkened room and waves a magnet up and down.  Although light is indeed made up of electromagnetic waves, no light would be detected.  The man waving the magnet does not disprove Maxwell’s theory; the missing component is speed.  The Churchlands’ thought experiment slows down the waves to a range at which we humans no longer see them as light, and by trusting our intuitions in the thought experiment we falsely conclude that rapid waves cannot be light either.  Similarly, the Churchlands claim that Searle has slowed down the mental process to a range at which we humans no longer think of it as ‘understanding’.  Thus the Churchlands contend that the same applies to Searle’s experiment: if we were to meet the man from the CR, who seemed to converse intelligently in Chinese but was really deploying millions of memorised rules in a fraction of a second, it is not so clear that we would deny he understood Chinese (Pinker, 1997, pgs 94-95).

In conclusion, on the narrow interpretation of Searle’s claim that the mind is not a computer program, I would accept his argument to be valid: the computer program itself does not ‘understand’ as we interpret that word.  I found Searle’s dismissal of the counter-arguments to be reasonable, with the exception of Dennett’s and the Churchlands’ arguments.  The Churchlands’ thought experiment is based on a very simple premise and is easy to understand, and Dennett’s emergent properties argument is compelling.  Speed and complexity could be key factors in strong AI.  What is undoubtedly true is that the ‘Turing Test’ is no longer a sufficiently subtle evaluation of artificial intelligence.  Progress in computing has advanced to a stage whereby it is entirely possible to converse with a computer and believe you are conversing with a human being.  The key issue in achieving a true computer ‘mind’ comes back to the framing issue: the ability to program algorithmically beyond essentially mathematical and logical tasks, to consider such areas as intuition, belief or even love.  These functions of the mind are frequently considered irrational and illogical, but they are what make us human.  The ability to program ‘illogically’ is probably beyond the algorithmic programming of digital computers as we know them today.  Equally, Searle’s argument hinges on our understanding of the semantics of language.  What do we mean by ‘understanding’ or ‘meaning’?  Or, finally, is it a limitation of English as a language that it cannot provide an adequate account of the differences and similarities between the mind and artificial intelligence?  Regardless, Searle’s valid dismissal of strong AI will not slow the pace of computing development and the likely move from the physical limitations of silicon-based computer architectures to bio-computers, which utilise biologically derived molecules to perform computational processes.
The future of ‘strong AI’ probably lies in the rapid development, dare I say growth, of such bio-computers!

References
Wilkinson, R (2007) ‘Chapter 3: Monism (Conclusion) and Artificial Intelligence (Beginning)’, in Minds and Bodies, Open University Press.
Pinker, S (1997) ‘Chapter 2: Thinking Machines’, in How the Mind Works, W.W. Norton & Company Ltd, London.

Copyright© John Tomany 2012