
Tuesday, October 23, 2012

Mozilla fixes security flaw in latest Firefox

Mozilla has released a fix for the latest version of its Firefox browser a day after it was withdrawn due to a security flaw.

The non-profit organisation said the vulnerability in Firefox 16 could allow a malicious website to capture web history, enabling hackers to see what websites people had visited.

Mozilla announced in a blog post that an update for Firefox for Windows, Mac, Linux and Android has been released.

The updated Firefox 16.0.1 is available through automatic updates and new downloads through the Mozilla download site.

Version 16 was withdrawn within a day of release. Mozilla said a limited number of users had been affected, but there was no evidence the vulnerability had been exploited by hackers.

However, Tal Be'ery, web researcher at security firm Imperva, said a proof-of-concept exploit for the vulnerability exists.

The flaw in Firefox 16 meant the browser was leaking URL data across domains by failing to restrict access to JavaScript’s “location” object, he said.

In theory, a user would browse to a malicious site, which would open a new window pointing at Twitter. Anyone signed in to Twitter would be redirected to a URL containing their personal Twitter ID, and the attacker could then query the new window’s URL to obtain the victim’s ID.

On previous versions of Firefox, this attack would fail, but a regression in Firefox 16 allowed it to work.



Case study: Benetton rolls out shopper and staff Wi-Fi

The High Street has always been a hub of activity but increasingly it is becoming a hub of connectivity for the consumer hitting the shops.

A number of large retail stores have begun introducing technology to the shop floor, from QR codes to mobile applications, to boost the customer experience and help shift more goods.

Now clothes outlet Benetton has embraced the trend and brought public Wi-Fi access to its customers.

“Originally we wanted Wi-Fi for mobile staff moving between offices,” said Mark Bishop, IT manager for Bencom Retail, which operates Benetton’s UK stores. 

“Also in our flagship stores we had a number of promotions and events going on, so Wi-Fi was useful for temporary internet access.”

Benetton installed a few ad hoc Cisco routers, but had not considered public Wi-Fi access, due to the security issues of having customers and staff sharing the same connections.

Benetton began conversations with On Direct to install the company’s managed Wi-Fi solution. This gives the IT department access to each of its access points and the ability to monitor activity piece by piece, meaning any performance issues can be picked up and addressed straight away.

It also provides traffic management capabilities so staff access can be prioritised over the public if they need it for work applications. Extra levels of security can be added in to keep the two types of data from crossing.

One of the tools Bishop rated was the cloud-based management controller, as it made the roll-out of the solution much simpler.

“Being the IT administrator for all our UK-owned stores and UK offices, I needed something that was easy to deploy,” Bishop said. “Some of our sites can be quite remote and the fact that On Direct offers a cloud-based controller means the only device to be rolled out is the access point.

“Either On Direct or I directly send the unit to the stores. Because the units self-configure from the cloud, the staff simply connect them to the network. Deployment takes just minutes for new access points and you don’t have to be an IT expert to work it out. 

"This drastically reduces deployment costs and time as we don’t have to rely on external contractors for installation.”

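Zero-touch deployment of this kind typically means the access point fetches its settings from the cloud controller on first boot. Below is a minimal Python sketch of that pattern; the endpoint URL, JSON fields and serial-number scheme are illustrative assumptions, not On Direct's actual API.

```python
import json
import urllib.request

# Hypothetical cloud-controller endpoint; a real unit would ship with
# its controller address and credentials baked in.
CONTROLLER_URL = "https://cloud-controller.example.com/api/v1/provision"

def fetch_config(serial_number: str) -> dict:
    """Ask the cloud controller for the configuration assigned to this unit."""
    req = urllib.request.Request(
        f"{CONTROLLER_URL}?serial={serial_number}",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def apply_config(config: dict) -> None:
    """Apply the received settings (stubbed out for illustration)."""
    print(f"Broadcasting SSIDs: {config['ssids']}")
    print(f"Staff VLAN: {config['staff_vlan']}, guest VLAN: {config['guest_vlan']}")

if __name__ == "__main__":
    apply_config(fetch_config("AP-0001"))
```
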
Often there are issues along the way, but the only one Bishop could cite was getting to grips with the number of management tools on offer, which could prove to be a little complex.

“The ‘nice’ challenge is identifying how to use all the various analytics and dashboard information,” Bishop said. 

“The level of usable information is very impressive for even the most basic user or administrator.”

The price was also appealing. Dispensing with the on-site wireless controller cut costs significantly. On Direct offers monthly rental schemes for the hardware and Benetton pays a licence fee for the dashboard application.  

The roll-out of the Wi-Fi network has received praise from staff and customers who are increasingly using the connections.

“With ever evolving connectivity and security, we found we could offer our customers usage of our bandwidth without affecting the performance of our internal systems,” said Bishop.

The IT department has also focused on taking advantage of the extra marketing opportunities the network log-in brings, but prides itself on not being too intrusive.

“The open public network is free to all within range and directs users to a customised Benetton splash page welcoming them to the store or brand,” explained Bishop. “This is a good opportunity for us to promote any in-store or corporate activities.”

“Once a disclaimer is agreed, we initially re-direct to the Benetton website but then allow virtually unrestricted web usage thereafter, which is what most people want when they are on the move or shopping.

"Rather than interrupting the user’s experience with constant splash pages, we prefer to use a far less intrusive banner which sits on top of any website the user browses to.”

The internal network for employees is secured from the public, who are blocked from accessing the local area network (LAN) it runs on.

The next step is to see how much more Benetton can exploit the promotional uses of the Wi-Fi network. Because of the easy deployment, Benetton also plans to use the system for more one-off events to attract attention.

“Our most recent use of Wi-Fi with the support of On Direct was to install additional access points at one of our flagship stores in Knightsbridge for one day, to support the international launch by our CEO of the latest phase of our current corporate advertising campaign,” said Bishop.

“As the hosting country, we asked On Direct to provide high-speed internet access to over 100 journalists and invitees to this major event. This was potentially a tricky task, considering the event was hosted in a flagship store which was trading right up until the morning of the event.”

“The event went off without a hitch,” he added, “and is a great example of how easy it is to set up managed Wi-Fi at a moment’s notice to provide on demand internet access.”



Monday, October 22, 2012

Did IT problems force Santander out of RBS deal?

Spanish bank Banco Santander has pulled out of its multibillion-pound agreement to take over 316 Royal Bank of Scotland (RBS) branches, and the customers associated with them, because of IT integration problems.

But sources say that IT problems can always be sorted out and the claims might be an excuse to pull out of the deal.

Partly-nationalised RBS has been forced to sell assets by the government, after being saved from collapse during the bank rescue package in 2008. 

Banco Santander is no stranger to huge projects to migrate customers to its system. It gains huge advantages by standardising its operations on its Partenon core banking platform. Acquisitions in the UK – including Abbey and Alliance & Leicester – were migrated to the platform.

Chris Skinner, chairman of the Financial Services Club, said that RBS has claimed all the data has been separated and it is a case of putting it on Banco Santander’s core banking platform Partenon. 

He said an Accenture report revealed that the migration of retail customer details would take until 2014, and that of business customers until 2015. “They could be making excuses not to go ahead with the deal,” said Skinner.

The banks made the agreement in August 2010. A year later, a new target completion date for the final quarter of 2012 was set. While not commenting on IT directly, Banco Santander said in a statement that integration targets would not be met: “It is now apparent that this revised target will not be achieved.

“Santander UK confirms that it has therefore notified RBS that it does not believe the conditions to the transfer of the business from RBS to Santander UK will be satisfied by the agreed final deadline of February 2013, and that it is not willing to agree a further extension to that deadline.   

"In that case, the agreement will automatically terminate in accordance with its terms and the transfer of the business to Santander UK will not take place." 

Banco Santander’s strategy has been to grow by acquisition and integrate the IT operations of the firms it buys into Partenon, which uses in-house middleware called Banksphere.

As well as rationalising IT, this creates cross-selling opportunities and improves customer satisfaction and operational performance. The platform uses a single database, so all of a customer’s relationships with the bank are automatically linked through a single customer view.

Santander bought Abbey in 2004 and acquired Alliance & Leicester in 2008. It set a target of £300m cost savings after integrating Abbey with Partenon and it planned to make efficiency savings of between £30m and £50m by integrating Alliance & Leicester with its core banking system.



Case Study: Groupon gets a remote voice with Natterbox

Remote working has been changing from a nice idea to part of the day job for many companies, especially in 2012. Be it the inspiration (or scaremongering) of the Olympic traffic situation or the influx of new devices hitting the shelves, working from home or on the move is becoming an option for more and more companies.

Online deals company Groupon has its UK offices in London Bridge so, when it came to the summer of sport, it decided the time was right to embrace remote working. However, it was the need for voice communications that got it examining new technologies on the market.

“We are predominantly an online business, so telephone interactions represent an important opportunity to get closer to our customers,” said Ash Mahmud, head of customer relationship management (CRM) at Groupon.

“We needed a voice solution that was flexible, scalable and efficient, but one that would also greatly improve a customer’s interaction with our brand. You can’t put a price on customer experience,” he said.

Customer focus is paramount for an online deals company that depends on good experiences to win repeat business. Groupon was already using Salesforce.com’s CRM platform, so the solution it picked for remote workers had to integrate with that system.

“A traditional telecoms solution would not have met our expanding requirements and stifled our growth,” said Mahmud. “We required a new approach to voice that struck a balance between customer experience and business efficiency. It also had to be cloud-based with the capability for mass scalability.” 

Groupon decided to sign a deal with Natterbox to adopt its Voice Anywhere cloud PBX solution and its Voice Integration solution.

Voice Anywhere enabled calls from customers to be forwarded to any phone in any location. Employees working from home just needed a new SIM card in their handsets to make them compatible with the service – with no additional hardware or software downloads needed.

The service is operated from Natterbox’s datacentre, with security handled at the network level; it integrates directly with the CRM system, recording calls taken on mobile phones and updating records after each customer call.

“Natterbox is unique,” said Mahmud. “It uses the information within the CRM system to manage voice and automatically populates this voice activity back into the CRM system, against the correct contact, opportunity or account.


“This helps with monitoring call activity, providing training and analysing voice; for example, it highlights if we need more resources.”
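
The pattern Mahmud describes – matching an inbound number to a CRM record and writing the call activity back against it – can be sketched in a few lines of Python. The record layout and function name below are illustrative assumptions, not Natterbox's or Salesforce's actual APIs.

```python
from datetime import datetime, timezone

# Toy stand-in for CRM contact data, keyed by phone number.
crm_contacts = {
    "+442071234567": {"name": "Alice Example", "account": "Acme Ltd"},
}

call_log = []

def log_inbound_call(caller_number: str, duration_seconds: int) -> dict:
    """Attach a call record to the matching CRM contact, if any."""
    contact = crm_contacts.get(caller_number)
    entry = {
        "number": caller_number,
        "contact": contact["name"] if contact else "unknown",
        "account": contact["account"] if contact else None,
        "duration_s": duration_seconds,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    call_log.append(entry)  # in a real system this would be written to the CRM
    return entry

print(log_inbound_call("+442071234567", duration_seconds=312))
```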

The CRM manager said these abilities to track, report on and analyse voice activity were “key drivers” when deciding on the technology, along with concerns about the lost time and inefficiencies Groupon was experiencing before the system was installed.

“The remote working aspect has allowed staff to work around their personal lives [and] the automatic call diversion function allows the system to know who is available, thus maintaining a seamless customer experience,” he added.

Overall, the four areas Groupon believed had been improved by the introduction of the Natterbox technology were voice management (both personalisation and customer experience), the efficiency of the firm, reporting and analysis, and the ability to train its employees.

As the Olympics came to a close, Groupon was so pleased with the system that it continued rolling it out to its other UK employees, and it plans to keep working with Natterbox as the company grows.

"With a business of our size and scale, the project continues to evolve as our requirements develop and shift,” Mahmud said. “It was crucial we found a solution that had the flexibility to meet these demands and with Natterbox we have. This is a long-term relationship.”  



Sunday, October 21, 2012

Home secretary Theresa May rules on hacker Gary McKinnon’s US extradition

Home secretary Theresa May is to announce a decision on self-confessed computer hacker Gary McKinnon's extradition to the US.

At the same time, Theresa May is expected to announce changes to the UK’s extradition arrangements with the US, which have come under increasing criticism for bias in favour of the US.

In September, the home secretary told Parliament the coalition agreement committed the government to reviewing the UK’s extradition arrangements worldwide to ensure they operate effectively and in the interests of justice.

The review follows several incidents where UK citizens have been extradited to face trial for offences that would either not have been crimes in the UK, or which would attract a very small penalty.

Gary McKinnon admits accessing US government computers in 2001 and 2002, but denies causing damage and claims he was looking for evidence of UFOs.

US authorities have demanded McKinnon face justice in the US for what they call "the biggest military computer hack of all time". The US government claims his actions caused $800,000 worth of damage to military computer systems.

The 46-year-old north Londoner, who has been diagnosed with Asperger’s Syndrome, has been fighting extradition since 2006.

McKinnon wants to face trial in the UK and, according to the Daily Mail, the home secretary is planning to introduce a measure that could make it more likely for UK citizens to be tried in UK courts.

The introduction of the so-called forum bar means a court hearing would have to be held to decide where a person should stand trial.

The home secretary could halt the extradition on human rights grounds based on the latest medical report. But if May allows the extradition to go ahead, McKinnon’s lawyers plan to apply for a judicial review to challenge the decision, according to the BBC.

A provisional hearing date has been set in the High Court for 28 and 29 November.



Security Zone: Passwords: Help users discover what is available!

Do your users get cold sweats when “change password” appears on their computer screen? Does the support desk still get swamped with users who have forgotten their current password? As company policy requires passwords to be stronger than ever, is it time to help employees cope with this task? Here are a couple of tips you could offer.

First, they can use the keyboard to generate passwords through patterns. Advise them to pick a letter they can remember. Just one letter; then pick a section of the keyboard where they can form that letter. 

For example, to make the letter V, they can start with the letter E and move diagonally down the keyboard to F, then B, and back up through H to finish at U. This produces the password "efbhu", which looks random but is easy to retrace. They can capitalise the first and last letters to strengthen it further – “EfbhU” – then add to that strength by extending the pattern into the upper row of numbers and special characters.

Instead of starting with E, start with the number 3 and end with the 8. Now the password is 3EfbhU8.

If this is not strong enough, make the 8 a special character by pressing the shift key to make the last character *. Now the password is 3EfbhU*.  

Or, better yet, shift the first two characters and make the rest lower case, giving the password #Efbhu8.

The flexibility of this method emerges when it comes time to change the password: the V-pattern can be moved to the right or left to produce a new password of equal strength.
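
To make the idea concrete, here is a short Python sketch of the pattern approach. The row strings and the V shape follow the article's example; the function name is illustrative.

```python
# QWERTY rows used to trace the pattern.
TOP = "qwertyuiop"
HOME = "asdfghjkl"
BOTTOM = "zxcvbnm"

def v_pattern(start_col: int) -> str:
    """Trace a V down and back up the rows (start_col=2 gives 'EfbhU')."""
    word = (
        TOP[start_col]
        + HOME[start_col + 1]
        + BOTTOM[start_col + 2]
        + HOME[start_col + 3]
        + TOP[start_col + 4]
    )
    # Capitalise the first and last letters, as the article suggests.
    return word[0].upper() + word[1:-1] + word[-1].upper()

print(v_pattern(2))  # EfbhU
print(v_pattern(3))  # RgnjI -- the same V shifted one column to the right
```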

They could also take advantage of the fact that the PC keyboard – with the help of the operating system (OS) – adapts to different types of users, through the control panel.  

You can advise employees to choose regional or language options. Click on the “Languages” tab and change the keyboard structure to create a ready-made password producer. 

If you change the keyboard to “Dvorak – for left hand”, the layout no longer matches the standard keyboard. For instance, "Chris" typed on a keyboard switched to Dvorak for left hand comes out as "Ghyok".

I used the same keys for both words, but the changed keyboard type turned standard English into the nonsense word above. So, if I wanted a password like "Homework", I would get "H.ibq.ya".

Once you add the keyboard, it can be placed at the top of the menu screen to allow users to switch and use their strong passwords, as required.
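
The layout switch amounts to a simple substitution, as the Python sketch below shows. The mapping table contains only the pairs implied by the article's two examples; it is not the full Dvorak left-hand layout.

```python
# Partial QWERTY -> Dvorak-for-left-hand mapping, reconstructed from the
# article's examples ("Chris" -> "Ghyok", "Homework" -> "H.ibq.ya").
QWERTY_TO_DVORAK_LH = {
    "c": "g", "h": "h", "r": "y", "i": "o", "s": "k",
    "o": ".", "m": "i", "e": "b", "w": "q", "k": "a",
}

def remap(word: str) -> str:
    """Show what typing `word` on QWERTY positions produces under the new layout."""
    out = []
    for ch in word:
        mapped = QWERTY_TO_DVORAK_LH.get(ch.lower(), ch.lower())
        out.append(mapped.upper() if ch.isupper() else mapped)
    return "".join(out)

print(remap("Chris"))     # Ghyok
print(remap("Homework"))  # H.ibq.ya
```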

As security professionals, we are coming to understand that the best way to enforce policy is to make it easy and even demonstrate how it should be done. For passwords, this couldn’t be more straightforward: the answer is literally at your fingertips!

Chris Greco, CISSP, is an IT project manager



Cookie law needs more than ‘do not track’, says Neelie Kroes

EC vice-president responsible for the digital agenda Neelie Kroes says website owners still need to obtain consent to use cookies, even if web users have browsers that offer 'do not track' (DNT).

Kroes, who is seeking a way for web users to opt out of being tracked by cookies, has accepted browser-based systems will not deliver that result, according to Out-Law.com.

A working group of the World Wide Web Consortium (W3C) has been overseeing work by top technology firms to develop a 'do not track' system for web browsers.

But in a speech in Brussels, Neelie Kroes said she is increasingly concerned about what she sees as a delay to concluding the DNT standardisation process. In particular, she said she was concerned about watering down the standard.

“The DNT standard must be rich and meaningful enough to make a difference, when it comes to protecting people's privacy. It should build on the principle of informed consent, giving people control over their information. And, indeed, it must be designed to let people choose to not be tracked,” Kroes said.

But Kroes said the way the discussion is going shows that the DNT standard, on its own, will not guarantee compliance with the EU cookie law, particularly because of the emerging consensus to exclude first-party cookies from the scope of the standard.

"The fact is, we need, as far as possible, a simple and uniform way of addressing e-privacy – across different providers and different types of tracking. You shouldn't have every provider reinventing the wheel on this one," Kroes said.

"Going the whole way would be better than going half way. But going half the way together is better than leaving everyone on their own. Because it is a common approach, open and generative, fit for the global web.

"But, if DNT only goes half way, providers will need to ensure legal compliance beyond that. There will be a delta, things providers need to do to get valid cookie consent; on top of or beyond implementing DNT."

Kroes said those involved in DNT standard discussions "need to find a good consensus – and fast". She specifically called on US firms to be mindful of EU rules on cookies.
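
On the technical side, honouring the signal is straightforward for a website owner; Kroes's point is that DNT alone does not satisfy the consent requirement. A minimal sketch, assuming a Python Flask app and the standard DNT: 1 request header (the consent-cookie name is an illustrative assumption):

```python
from flask import Flask, make_response, request

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("Hello")
    # Respect the browser's Do Not Track signal...
    dnt = request.headers.get("DNT") == "1"
    # ...but EU rules still require explicit consent for non-essential
    # cookies even when no DNT header is present.
    has_consent = request.cookies.get("cookie_consent") == "yes"
    if not dnt and has_consent:
        resp.set_cookie("tracking_id", "abc123")
    return resp

if __name__ == "__main__":
    app.run()
```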

Earlier this month, a survey published by the University of California found that most US citizens reject online tracking and do not want any information collected about which websites they visit.

Asked what they would like a DNT function to do, 60% said they want it to prevent websites collecting any information about them.

One-fifth said such a tool should allow them to block websites from serving ads and 14% said they would like it to prevent websites from tailoring advertisements based on sites they had visited.

The study revealed that 20% were under the impression that advertisers were not allowed to track people when they browsed medical sites. Only a third correctly said they could be tracked by marketers.

In September, three months after the enforcement of the cookie law, a study revealed that only 12% of UK websites complied with the cookie law and had implemented prominent privacy notices with robust cookie controls.

The regulation on the use of cookies derives from an amendment to the EU's Privacy and Electronic Communications Directive.

The directive and related UK law came into force on 26 May 2011, but the Information Commissioner's Office (ICO) gave businesses 12 months' grace to comply.

However, a recent analysis of more than 200 top UK websites showed just over half have at least minimal privacy notices with limited cookie controls.

The study by data privacy management firm TRUSTe revealed that 37% of websites in the sample do not appear to have taken any steps to comply with the law.



Saturday, October 20, 2012

CIO interview: Frank Modruson, Accenture

Frank Modruson joined Accenture in 1987, working initially as an analyst on complex IT projects at customer sites. For the past ten years, he has been Accenture’s CIO, responsible for the outsourcing giant’s own IT operation.

Modruson says that, as an outsourcing service provider, the company eats what it sells.

“We outsource IT to Accenture and use the same capabilities as our customers,” he says. All internal IT is completed by the different customer-facing parts of Accenture’s outsourcing business. 

“We aggressively apply the best practices we recommend to customers to ourselves.

“Where Accenture has the capability, we outsource to it. If we don’t have the capability, we outsource to a third-party,” Modruson says.

Like its customers, Accenture outsources the operation of its IT to different delivery units, but the internal IT team retains control of things like strategy and planning. That team numbers about 450 people.

Getting the most out of this team – ensuring IT staff are used and developed in the way that best benefits the company while helping the business do more – is a key challenge of his role, he says. But Modruson says his main technology focus is video.

“A big area for me today is driving video adoption to help us reduce travel and how we use locations. The next two to four years will be all about video,” he says. This is part of the company’s plan to become a virtual corporation in terms of the use of video.

He says plummeting connectivity prices have made video a genuine business tool.

“The cost of video is dropping and it is becoming so inexpensive to communicate via video."

Accenture now uses video in both its internal operation and its services business. Through video, it can deploy consultants to clients in a more cost-effective way. 

“Our clients love being able to meet experts without them having to fly out for a one hour meeting,” says Modruson.

He says businesses should take a look at how consumers are using video: “We have it at home, but we have not yet brought it into the workplace. But it’s coming.”

“Ten years ago, doing what you do on Skype would have cost thousands of dollars. Now it is on everyone’s desktop and is almost free.”

Modruson says video is a more effective way to communicate than telephone. “We have had the phone for years, but what is missing is the visual cues.”

Accenture has been using video for internal meetings for years. “We have an IT steering committee that used to get together twice a year for one-day sessions. That group has not met in person since 2007.

“We save a lot of money. We have one project where the savings in travel was 20 times the investment in video. It paid for itself in less than a month.”

Accenture has over 100,000 video-enabled accounts in its internal business.

Video is the current drive for Accenture's IT department because industry buzzwords like server virtualisation, flexible working, desktop virtualisation, bring-your-own-device (BYOD) and the cloud are already in hand.

Accenture has virtualised 98% of its servers, has been doing BYOD since the second iPhone, and has 115,000 devices registered in its programme.

On flexible working, Modruson says: “We have been doing it so long, we forget it’s cool.”

When it comes to desktop virtualisation, Modruson says, Accenture does not do it simply because it does not need it. 

"I do not need virtual desktops because, in 2001, my predecessor decided to move to browser-based applications," he said. He added if he bumped into that person today he would give him/her a big hug.

With desktop virtualisation “you end up with two computers versus one,” says Modruson.

But he adds that if an organisation does not have the functionality required to enable applications to be accessed anywhere, desktop virtualisation is a very effective and fast way of providing it.

"It is a way to get to the same situation quickly," he says.

On cloud computing, Modruson says about 80 of Accenture’s 500 applications are in the cloud as a service, but the company will move more to the cloud when appropriate.

Modruson supports IT for a company that has 257,000 people who are potential CIOs. “The most common background of CIOs I meet are ex-Accenture consultants,” says Modruson.

Although this means that his job will be heavily scrutinised, he says it is a great benefit.

“I get a lot of free advice.

“It is great because we have an organisation and individuals that are excited about technology which challenges and pushes us. A demanding customer is the best you can have because they tell you what they need and what works.

“Telling me nice stuff is great, but the complaints are very valuable.”

In his role, Modruson also supports Accenture’s delivery business with advice and also communicates with its CIO customers. 

“Customers are curious about how we solve problems and we share that information with them.”



UK calls for video-conferencing standard

The use of video-conferencing by consumers and businesses may be increasing, but there is a need for a common standard to enable different services to work together.  

A Cisco survey found 79% of UK respondents want such a standard, saying its absence is hampering their use of the technology. These expectations were reflected across Europe, with 81% calling for a standard – 35% of whom said it was “extremely important” to their continued use of video-conferencing.

The biggest demand was for Microsoft to open up Skype – one of the most popular video-conferencing tools available – which it bought in May 2011, with 78% of those surveyed across Europe appealing for the change. This rose to 86% among UK respondents alone.

Keeping it closed was deemed “unfair” to customers by 72% of European respondents, but 68% of UK respondents believed Microsoft would change its mind and make Skype a more open platform in the future.

“Consumers and business users alike are demanding that video communications solutions work together and that making a video call be as easy as making a phone call,” said Chris Dedicoat, president of the European region at Cisco.

“This is about freedom of choice, and the technology industry must rally around open standards, as it did so effectively with telephone service, the internet and email. Only with a truly open video community will we fully reap the economic and social benefits of this transformational technology,” he said.

Other figures from the survey showed 40% of those who used video-conferencing planned to use it much more in the next year, with only 4% expecting their use to dwindle.




Scottish universities provide superfast rural broadband

Remote areas of Scotland are now able to access superfast broadband, thanks to a research project by two of the country’s universities.

The University of Edinburgh and the University of the Highlands and Islands have been working together for some time on the Tegola project, which aims to bring fast internet connections to the population of rural Scotland.

It had already established an academic network joining the two universities, linking the University of the Highlands and Islands’ Gaelic college on Skye – Sabhal Mòr Ostaig UHI – with Edinburgh.

Now it has utilised that high-speed connectivity to roll out connections to the surrounding islands of Eigg, Rum, Muck and Canna, as well as into some rural parts of the Scottish mainland, thanks to the low-cost network of relays on hand.

HebNet, a local technology firm specialising in connecting rural areas, took on the task of building out the extensions to the network, but the local communities will end up managing the connections.

The scheme has received backing from the Scottish government’s Community Broadband Scotland initiative, with Nicola Sturgeon, cabinet secretary for infrastructure, investment and cities, saying: “Broadband should not be considered a luxury in places like the Highlands and Islands; it is essential to enhance the quality of life of communities and to stimulate the growth of the local economy.”

Jem Taylor, head of strategy and development for the University of the Highlands and Islands IT team, added: “Commercial internet providers have so far failed to find an economical way to reach these remote, sparsely populated and often mountainous regions, meaning many are being left behind in the digital revolution.

“Now we’ve established the model works, it has the potential to be used in other rural communities.”

A launch event celebrating the new connections took place over the weekend and superfast broadband is available to residents now.



Friday, October 19, 2012

O2 blames network node for outage

An outage on O2’s network left 10% of its customers unable to use their phones on Friday.

O2 admitted the fault late Friday afternoon, saying one of the network nodes for connecting its traffic had fallen over. This meant O2 customers were unable to make or receive calls, send or receive texts or use their mobile data for several hours.

Although it only took O2 three hours to find a fix, the effect was still being felt well into the evening by its customers. O2 blamed the “rush hour” traffic it always gets in the early evening when many leave work.

“Due to high phone use during the 'rush hour' early evening period, customers may experience intermittent performance as full service comes back for everyone,” said O2.

“We would like to reassure those customers still affected that we are working as hard as we can to restore normal service to everyone.”

The mobile operator said the fault was not the same as that which took down the network in July, leading to a 24-hour blackout on O2's mobile network. 

O2 said it would not be offering compensation this time, as it had done after the previous incident.

When asked if it was sure the issue would not recur, O2 read from a statement: “We operate to the highest industry standards and use leading industry infrastructure. We will continue to challenge and assess what further steps we and our infrastructure partners can take to further improve our network performance.”

O2 said the fault is now fixed. If you are still having issues, O2 suggests restarting your handset.



AMD cuts 20% of workforce

Chip maker Advanced Micro Devices (AMD) plans to cut as many as 2,340 jobs, or about 20% of its workforce.

The cuts are expected to be announced as early as next week, according to Bloomberg, citing an anonymous source as the plans are not yet public.

CEO Rory Read has already lowered the headcount since he was appointed in August 2011. In November 2011, AMD cut 10% of its workforce to reduce costs.

Last week AMD announced it is expecting a 10% decline in sales in Q3 compared with Q2, ahead of the announcement of third quarter results on 18 October.

“The lower than anticipated preliminary revenue results are primarily due to weaker than expected demand across all product lines caused by the challenging macroeconomic environment,” AMD said.

AMD expects a third-quarter gross margin of 31%, down from its previous expectation of 44%. This is mainly because of an inventory write-down of around $100m, due to lower anticipated future demand.

Rival chipmaker Intel also reduced its Q3 forecast, citing reduced demand from the PC market and slowing demand from emerging markets.

In the third quarter of this year, total global PC shipments fell 8.3% compared with the same period a year ago, to 87.5 million units, according to research firm Gartner.

Gartner predicts tablet sales will increase 53% in the next year, and says the 2012 figure of 118.9 million units will more than treble by 2016.

Windows 8, due to be released on 26 October, may boost flagging PC sales, but it may also further accelerate the sales of tablets and other new form factors because it is also designed to work on ARM processors developed for the next generation of mobile computing devices.



Thursday, October 18, 2012

Finance CIOs reveal plans to recover from recession

Research among 55 bank CIOs reveals how companies ravaged by recession are investing in IT to help them emerge stronger.

The financial crisis began with the credit crunch in 2008 and the subsequent collapse of investment banking giant Lehman Brothers, and it led to unprecedented change in global financial services.

A surge of regulation, falling profits, disgruntled customers, bankruptcy and even nationalisation have all become unexpected consequences of the 2008 crash.

The Financial Services Report research, carried out by IT giant Fujitsu, analysed the activities and plans of 55 banks across the retail, investment and wholesale banking sectors.

CIOs were asked for their top three IT priorities for the next three years. Over half (51%) listed reducing cost as a top priority, while 27% cited upgrading IT systems, 22% improving customer experience, 20% mobile banking and 18% moving to the cloud.

A total of 85% said the IT department is attempting to meet the needs of the business by doing more with less.

When it comes to moving to the cloud, 40% of all banks said they have already implemented cloud across their organisation, and 29% said they planned to do so. A significant 22% said they do not see the cloud as an enabler for change and will not move any systems to it.

For IT professionals hoping for careers in financial services IT, and for suppliers targeting services at banks, a good understanding of core business processes is essential: 67% of the CIOs questioned said this was the most required skill. Infrastructure/enterprise skills were required by 33% of CIOs, specific software skills by 17%, and 17% also said cloud computing expertise is important.

Mobile banking is high on the agenda. A significant 71% of the CIOs surveyed said mobile banking is important for customers, compared with 49% when the question was asked three years ago.

The biggest overall benefit of mobile banking is the ability to generate new revenue streams, with 80% citing it as a key benefit. Better customer retention is a key benefit according to 76% of respondents.

The report said financial institutions in the UK find themselves at a pivotal time and that IT has an important role. 

“They need to reshape their ICT to reshape their business. New regulation coming on stream between now and 2014 will demand that retail banks and financial services organisations really get their houses in order," said the report.

"To meet these needs, IT will need to be fit to change. Flexibility, more analytics and better focus on the needs of the business and customers will underpin success. Solving platform complexity will be important.”



Is wind energy just blowing in the face of datacentre cooling reality?

Google recently announced that it has signed an agreement to power its Oklahoma datacentre with 48MW of energy from a wind farm due to come online later in 2012, bringing the company’s total contracted renewable energy to 260MW, writes Clive Longbottom.

Great – at last, a major energy-guzzler is taking sustainability seriously and looking to renewable sources to power its datacentres.

But will using wind energy in datacentres really help address issues of carbon emissions?

It is pretty difficult to identify where a unit of power has come from. If a power socket or distribution system is plugged into a local or national grid system, the actual energy provided is from a pool of all generated power – and no differentiation can be made between energy that comes from renewables, gas, coal or nuclear. The only way to do this is to single-feed energy directly from the source.

The stated source of the wind power for Google is the Canadian Hills Wind Project. Construction started in May 2012, with a rated end capacity estimated at 300MW. But rated capacity and real output are not the same.

The real output of a wind turbine is around 30% of its rated capacity, leaving the project with only about 100MW of output.

Why only 30%? The rated level assumes constant output at optimum wind speeds. This does not happen anywhere – even in the windiest places. Imagine when there is a meteorological “high” over the region: winds are mild to non-existent, and energy has to come in from elsewhere on the grid. Even if the wind blows enough to turn the turbines, output suffers at low wind speeds – optimum efficiency is only reached at around 34mph – so the energy gap has to be filled from elsewhere.

At the other end of the scale, let us assume that there is a gale blowing. With winds above around 50mph, wind turbines have to be “parked” to prevent physical or electrical damage. While no energy is being generated from the wind turbines, then the needed energy has to come from elsewhere. The contracted energy is just being pulled from that pool of all types of power generation – it is not being provided specifically through wind power.

The wind farm Google is planning to use was to power around 100,000 homes in the local area – once Google’s 48MW is taken from the roughly 100MW of realistic output, this will have to be done with just 52MW, or a little over half a kilowatt per home. The US Energy Information Administration (EIA) estimates that an average US house requires 11.496MWh per year, but 52MW allotted across 100,000 homes adds up to just 4.56MWh per home over a full year – less than half the estimated requirement.
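
The article's arithmetic can be checked directly. A short Python calculation reproducing it, using 8,766 hours as the average length of a year:

```python
rated_mw = 300
capacity_factor = 0.30                       # real output ~30% of rated
real_output_mw = rated_mw * capacity_factor  # ~100 MW
google_share_mw = 48
left_for_homes_mw = real_output_mw - google_share_mw  # 52 MW
homes = 100_000

kw_per_home = left_for_homes_mw * 1000 / homes        # ~0.52 kW
mwh_per_home_year = left_for_homes_mw * 8766 / homes  # ~4.56 MWh

eia_requirement_mwh = 11.496
print(f"{kw_per_home:.2f} kW per home; {mwh_per_home_year:.2f} MWh/year "
      f"vs {eia_requirement_mwh} MWh required")
```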

Just where is the rest of the energy for those homes going to come from? Is it just a case that Google has shifted the “dirty” power usage from itself to the nearby householders?

So, is renewable energy all just smoke and mirrors, or is there actually a case for using it?

If the idea is for your organisation just to use wind power, then only go for it as a marketing exercise. Datacentre managers can tick the box on the sustainability part of the corporate responsibility statement, and hope that no-one questions it too deeply. You can close your eyes and ears to reality and fall for the smooth talk of the energy vendor who says that you are signing up for a pure wind-based contract. 

Any enterprise that is serious in its intention to use renewable energy must adopt a “blended” approach. 

Google uses hydro-power for its Dalles datacentre in Oregon, and it has installed solar power for the Mountain View datacentre. Google also owns two wind farms outright. Although solar power is not continuous, what Google is showing is a capability to blend its datacentre energy sources – with the use of dams, hydroelectricity can be pretty much continuous, as the energy comes through water and gravity in a predictable manner, and it is only in times of severe drought that hydro-power runs into problems. What Google cannot do is use its solar output from very sunny days in California to power its Oklahoma datacentre in periods of low wind.

Google can be pretty choosy about who it signs contracts with and how these contracts are run. For an average organisation, this may not be the case.

Enterprises must ensure they pick an energy provider that can demonstrate that it has a blended energy generating capability. Mixing constant sources such as hydro or tidal with inconstant sources such as wind and solar means that there is a better chance of maximising the energy taken from sustainable sources and making datacentres more efficient and “green”.

IT executives must also read the small print carefully. Renewable energy bought under contract tends to be sold at a premium.

Investment in renewables is still expensive – and a lot of this is underwritten and underpinned by government initiatives. Make sure you understand what happens to that premium – is it for further investment in more renewables, or is a large chunk of it just for shareholder profits?

Will the supplier guarantee what proportion of the energy supplied is generated from sustainable sources? For example, if a supplier has a total generating capability of 1,000MW, of which 500MW is wind, 100MW is solar and 400MW is hydro/tidal, the realistic total is likely to be around 550MW of generated power when measured against rated capacity. And that is an average: the supplier may be capable of providing 600MW at some times and as little as 350MW at others – the inconstancy of the sources makes predictability difficult. If the totality of the supplier’s contracts comes to 1,000MW, it is short in real terms by 450MW – and each customer is only getting around 55% of its energy from the supplier’s renewable sources.
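
A rough Python sketch of that portfolio calculation. The capacity factors are assumptions chosen to land near the article's figures: hydro/tidal treated as near-constant, wind at about 30%, solar contributing little on average.

```python
# (rated MW, assumed capacity factor)
portfolio = {"wind": (500, 0.30), "solar": (100, 0.15), "hydro_tidal": (400, 0.96)}
contracted_mw = 1000

real_mw = sum(rated * factor for rated, factor in portfolio.values())

print(f"Realistic output: ~{real_mw:.0f} MW")           # ~550 MW
print(f"Shortfall: ~{contracted_mw - real_mw:.0f} MW")  # ~450 MW
print(f"Renewable share per customer: ~{real_mw / contracted_mw:.0%}")  # ~55%
```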

That then leaves the question of where the supplier brings the extra power in from – other renewable suppliers (who will have the same problems), or fossil fuel and nuclear sources?

Renewable energy is important and should be part of a datacentre’s power mix, but IT executives should not fall for the snake oil, smoke and mirrors and believe that everything signed up for will be from renewables. They must check the contract, make sure that the premium is reinvested in suitable new projects and that excess energy is sourced ethically and openly. 

Clive Longbottom is a Service Director at UK analyst Quocirca Ltd.




Rail franchise suspension proves cloud concept for operator Go-Ahead

The suspension of bidding for the Thameslink rail franchise acted as a proof of concept for operator Go-Ahead’s decision to move to a cloud-based collaboration platform.

The incompetent handling of the bidding for the West Coast rail franchise, which Virgin Trains lost to First Group only for the decision to be reversed after mistakes were revealed, led to the suspension of other ongoing bidding processes for rail operations, including the Thameslink rail service.

One of the bidders, Go-Ahead Group, said its recent decision to move to Huddle’s cloud-based collaboration platform has proved a good one in the light of the suspension.

Kevin Goodman, head of business excellence at Go-Ahead Group, says that, before using Huddle, when the group bid for a franchise it would have a group of people working separately on collecting all the information required. Bidding teams would be put together for short periods of intense work. These people would communicate via email and store important data on their own computers. Traditionally, all the data garnered would be dispersed at the end of the project and lost to the business.

“We would get to the end of a bid process and people and data would then be dispersed,” said Goodman.

The information in a bid includes reports, data from the Department for Transport and information such as the results of passenger surveys.

The use of Huddle’s cloud-based collaboration software means all the work done by the team, including the latest version of every document, is centrally stored and retained by the company. In the past, the suspension of the Thameslink bidding would have meant releasing consultants from the project and losing all the work done. Today, the consultants can be released and return to the project when bidding restarts, without any information being lost.

“We had to scale back our bid team in less than a week,” says Goodman. The expense of keeping consultants on for an indefinite period of time was not an option, due to high costs. “We would normally not have been able to disperse the team so quickly and would have lost all the information.”

“Now when the bidding restarts, we will just be able to pick it up where we left off.”

Goodman says that, while bidding project teams have remained the same size, the Huddle service means they can get bids up and running much more quickly. The cloud-based collaboration ensures that anybody, regardless of location, can get involved, and all participants know they are working on the most up-to-date version of the bid.

Go-Ahead is an operator of rail franchises including South Eastern and Southern and runs numerous bus services.

Goodman said the rail division of Go-Ahead specified a requirement to retain corporate knowledge after a bid was over, and the IT department, which itself used Huddle, recommended the cloud-based service.

Goodman says the improvements to processes, not taking into account cost savings, are substantial. “The efficiencies we have seen already comes more from the fact that we can retain information.”

Hundreds of thousands of US public sector workers have access to cloud-based collaboration software from UK IT firm Huddle after two major US government departments signed up.

A version of Huddle used by the UK government recently became available to the US government, which will start using the public cloud version – the version available to any business in the world.




Wednesday, October 17, 2012

VMworld Europe 2012: Key highlights and technology takeaways

Software defined datacentres, cloud management and automation, virtualisation licensing, mobile virtualisation, heterogeneity and the need for IT executives to develop new skills in the cloud era were some of the themes of the VMworld Europe 2012 conference.

“Cloud is a disruptive technology,” said VMware’s new chief executive Pat Gelsinger, in his opening keynote, setting the tone of the Right Here, Right Now conference, heavily focused on cloud computing’s role in datacentres and its automation.

The IT industry is in a fundamental state of change, driven by cloud computing, Gelsinger said. 

IT was reactive to business needs a few years ago and is now proactive, but it needs to go one level further, to an “inventive” stage where the datacentre is managed by automation software, he added.

Today’s datacentre is “like a museum of IT”, comprising legacy hardware, mainframes and layers of databases, said Gelsinger.

It is important to change how datacentres are built and operated, he added. A move towards a software defined datacentre will take the industry away from the legacy and hardware-centric infrastructure that is still in use.

A software defined datacentre is one where all the elements of the infrastructure – including networking, storage and security – are virtualised and delivered as a service, and the control of this datacentre is entirely automated by software, according to Gelsinger.

It will enable IT departments to support both private and public cloud systems using the same management tools, he added.

Gelsinger then unveiled VMware’s software defined datacentre strategy. VMware’s revised vCloud Suite, which includes vCloud Automation Centre, will help IT ensure that applications such as SAP, Oracle and Microsoft SharePoint and Exchange run well on VMware systems.

“A software defined datacentre could bring a high degree of agility and flexibility to the infrastructure,” said Tony Lock, analyst at Freeform Dynamics, a research and analysis company. 

“But businesses may not be fully ready yet for such a transformation.”

VMware also released a set of resources – called VMware Compliance Reference Architecture – including solution guides and design architectures to simplify compliance for applications in the cloud era.

Alongside cloud management, desktop management was another highlight at VMworld Europe 2012. VMware demonstrated its Mirage tool – which keeps centralised virtual copies of every end-point synchronised with the datacentre.

The tool allows the data to be held in a device-agnostic way, helping users access and share data on the devices of their choice. It deploys de-duplication over the network and in storage.
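
De-duplication of this kind generally works by content-addressing: identical chunks are stored, and sent over the network, only once. A toy Python illustration of the underlying idea, not VMware's actual implementation:

```python
import hashlib

CHUNK_SIZE = 4096
store: dict[str, bytes] = {}  # hash -> chunk, shared across all endpoints

def upload(data: bytes) -> list[str]:
    """Split data into chunks; only previously unseen chunks cost anything."""
    refs = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:   # duplicate chunks are free
            store[digest] = chunk
        refs.append(digest)
    return refs

def restore(refs: list[str]) -> bytes:
    """Rebuild the original data from its chunk references."""
    return b"".join(store[r] for r in refs)

image = b"A" * 10_000 + b"B" * 10_000
refs = upload(image)
assert restore(refs) == image
print(f"{len(refs)} chunks referenced, {len(store)} unique chunks stored")
```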

Amid the growing use of bring-your-own-device (BYOD) – or SYOM (spend-your-own-money) in VMware’s words – VMware acknowledged the need for users to access data from any device. Until now, its mobile virtualisation strategy was focused on Android operating systems (OS) even though Apple iOS is more popular.

Mirage will help users upgrade from Windows XP to Windows 7 while keeping all their data and files intact. Installing Mirage also means that when a user’s device is broken or stolen, the data and apps can be retrieved from the central location. It also allows users to access their Windows PC on an Apple iPad via a remote access client.

But VMware’s support for Apple iOS didn’t end there.

It also unveiled Horizon Suite to help IT executives manage workspace apps on consumers’ mobile devices – including Apple iPhones. The suite includes tools such as Horizon Application Manager, Horizon Data (previously called Project Octopus) and Horizon Mobile for iOS as its major components.

Steve Herrod, chief technology officer (CTO) of VMware, said the vCloud Director and vFabric Application Director tools will help IT manage the rival Microsoft Hyper-V virtualisation platform, emphasising the importance and benefits of a heterogeneous IT environment.

He added that VMware has created a blueprint marketplace, called the Application Management Marketplace, to enable third-party software providers to add their own blueprints. The marketplace – which will be like an app store for business IT – is currently in beta.

VMworld Europe 2012 had tidbits for hardcore virtualisation professionals too. Herrod reminded users that the supplier has “killed off” vRAM licensing with the release of vSphere 5.1.

The erstwhile licensing policy set licensing levels at lower RAM limits, forcing users to buy more licences and spend more money.

Following an uproar from customers, VMware raised the vRAM limits per licence but the limited increase did not go down well either. It finally axed the much-maligned scheme at VMworld 2012.

Separately, vSphere 5.1 features an expanded “monster VM”. Version 5 of vSphere, released last year, introduced monster VMs – virtual machines with huge memory capability. Version 5.1 supports even larger ones – “more muscular monster VMs” – with up to 64 vCPUs and 1TB of vRAM per VM.

Experts also emphasised that IT executives need to become brokers of services for the cloud. 

“IT professionals need to move away from being IT builders to IT service brokers in this cloud age,” Ramin Sayer, VMware vice-president for cloud and virtualisation management products, told ComputerWeekly, reiterating Gelsinger's message at the event.



M2M technology ushers in the age of total connection

“I’m friends with my toaster on Facebook and my laundry sends me a message to tell me when it has dried,” says Jari Arkko, internet architecture expert at Ericsson Research.

Arkko is showcasing his “connected home” at the company’s research labs in Stockholm. He says the technology for ubiquitous connected devices is already here.

“When I check my incoming events feed I see things like the laundry is dry and my toaster is toasting,” he says. Readings from sensors in the room plateau when his washing has finished drying or his toaster has turned off, which triggers an algorithm to notify him on Facebook.

“It is a simple user interface. Even though it sounds crazy, it’s more natural, as it provides the opportunity to have information in one go.”
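
Arkko does not describe the algorithm itself, so the sketch below is only one plausible reading: a detector that flags an appliance as finished once its recent sensor readings flatten out. The class name, thresholds and simulated wattage figures are all illustrative assumptions, not Ericsson’s code.

```python
from collections import deque

class PlateauDetector:
    """Flag when recent sensor readings have stopped changing.

    A dryer's power draw falls and flattens once the cycle ends; if the
    last `window` readings vary by no more than `tolerance`, we treat
    the appliance as finished.
    """
    def __init__(self, window=8, tolerance=0.5):
        self.readings = deque(maxlen=window)
        self.tolerance = tolerance

    def update(self, value) -> bool:
        self.readings.append(value)
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough history yet
        return max(self.readings) - min(self.readings) <= self.tolerance

# Simulated power readings (watts): drying cycle, then idle
readings = [450, 430, 440, 460, 420, 3.0, 3.1, 3.0, 3.2, 3.1, 3.0, 3.1, 3.0]

detector = PlateauDetector()
for watts in readings:
    if detector.update(watts):
        print("Notify: the laundry is dry")  # stand-in for posting to a feed
        break
```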

The number of connected devices is set to double over the next eight years to 50 billion globally, according to the GSMA, the trade body for mobile operators, and analyst firm Machina Research. If these estimates are correct, communication between connected devices will explode.

Telematics manufacturer Openmatics is the latest business to sign up to M2M technology, having partnered with Orange to offer telematics services for trucks and buses. Under the deal the location and status of trucks and buses can be monitored, enabling fleet operators to plan and manage their businesses more efficiently.

Data will be recorded and received by an on-board unit and transmitted over the network, with access provided via a web-based portal.

Orange Business Services will provide Openmatics’ communications infrastructure for data transfer, ordering, activation and tracking through its international M2M centre in Brussels.

Arkko says that, in the future, consumers and businesses will have fully networked environments using social media platforms as the common interface: “We now have the capability to use the networks we have and the IP to communicate where we need to go,” he says. “Social media is not perhaps as widely deployed as it could be. Anything that you need to know and interact with could be represented in this manner, such as the copy machine telling me it’s broken and needs to be repaired, to more abstract things.

“There are a lot of programmers who can build web-based applications, using platforms such as Facebook or Google. It is surprisingly easy. Now any school kid with programming expertise can link things up. I have 200 ports in my house, but that is not enough for me. I’ve run out! We are on the brink of the networked society; many of the tools needed for building these things are here.”

Open source, wireless LANs and next-generation networking tools such as IPv6 will be key in arriving at this point, he says. “The boom will come when people realise the fun we can have with these things.”

Certain industries are already seeing an uptake in M2M, particularly where regulations are driving adoption.

The UK government has set a target to have 53 million smart meters in homes and businesses by 2020. Key to this is the use of M2M technology: the meters will communicate with a central datacentre, which sends messages to the utility companies, which in turn alert users about their energy usage.
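
The article does not specify how a meter reading is represented, but the flow it describes – meter to central datacentre to utility – can be sketched with a simple JSON message. The field names, meter ID and routing rule below are illustrative assumptions; real deployments use metering standards such as DLMS/COSEM.

```python
import json
from datetime import datetime, timezone

# One 15-minute interval reading from a meter (all field names assumed)
reading = json.dumps({
    "meter_id": "GB-0001-XYZ",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "kwh": 0.42,  # consumption over the interval
})

def datacentre_ingest(message: str) -> dict:
    """Central datacentre: decode a reading and route it to the supplier."""
    record = json.loads(message)
    supplier = "utility-co"  # in reality, looked up per meter
    return {"forward_to": supplier, "record": record}

print(datacentre_ingest(reading))
```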

Smart metering is already widespread in Italy, Sweden and France because of regulatory enforcement.

Ana Tavares Lattibeaudiere, director at GSMA, says the connected automotive sector is another area that will grow fast. In 2014, the EU’s eCall regulations will start taking effect, requiring cars to automatically send out information about their location after a crash.

Other sectors include the monitoring of chronic diseases such as diabetes. Diabetics who monitor their blood sugar levels can have their results sent straight to their doctor.

Lattibeaudiere says such an automated process would be far more user-friendly: “The problem with email is that it tends to get neglected or lost, and doctors often don’t have time to look at them.”

Dan Bieler, principal analyst at Forrester, says there is a difference between M2M in the consumer environment and M2M in the enterprise.

“From the consumer end it’s about having a lot of apps in the home, communicating with each other, and elements of smart grids, such as electric and water meters. This includes applications in healthcare, monitoring heart rates which are linked up to a central database, so doctors can get in touch and inform patients they should have a check-up,” says Bieler.

“In the enterprise space there are a number of solutions, including metering in the context of facility management. It is being used to monitor areas such as the oil industry, measuring holes in pipelines. Asset tracking is another big area, as it is being used to track devices in hospitals.

“The range of applications is extremely widespread. It is quite horizontal in nature, being used to monitor, meter, navigate and notify in all sorts of sectors. But having said that, M2M surprisingly is still not a top priority. Our research found only one in 10 sees it as an important area.”

One reason for this is that the return on investment is often hard to demonstrate. In austere times the focus of the CIO is often on how to cut costs and reduce headcount: “It’s difficult therefore to come up with the required funding,” says Bieler.

“In some instances, companies are struggling to define which processes they want to support with that solution. But it is clear that interest is growing.”

One issue for the deployment of M2M is the lack of seamless wireless connectivity and patchy 3G coverage. The GSMA says spectrum will be crucial in achieving a more networked economy, supported by a sufficiently flexible regulatory environment in the telecoms sector and in other industries. In the next four years the mobile industry will invest $793bn in expanding the coverage and capabilities of mobile networks, according to the GSMA.

Networks provider Ericsson forecasts mobile data traffic will grow tenfold between 2011 and 2016. But even with new spectrum, mobile operators will need to be able to manage the fast-rising tide of traffic on their networks, both to deal with congestion and tailor delivery to specific service requirements, it warns.
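
As a rough sanity check, tenfold growth over the five years from 2011 to 2016 works out to a compound annual growth rate of about 58%:

```python
# Tenfold growth over five years: solve (1 + r) ** 5 == 10 for r
cagr = 10 ** (1 / 5) - 1
print(f"{cagr:.0%} per year")  # prints: 58% per year
```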

Bieler says lots of small bits of data can add up to significant amounts, which can easily be underestimated. Smart metering in itself does not include a huge amount of data, but in theory it involves millions of meters sending information every 15 minutes.
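
To make that concrete, here is a back-of-envelope estimate using the UK’s 53-million-meter target from above; the per-reading payload size is purely an assumption, since real message sizes vary by protocol.

```python
# Rough aggregate traffic from smart meters reporting every 15 minutes
meters = 53_000_000        # UK target for 2020, from the article
readings_per_day = 24 * 4  # one reading per 15-minute interval
payload_bytes = 200        # assumed size of one reading, including overhead

daily_tb = meters * readings_per_day * payload_bytes / 1e12
print(f"~{daily_tb:.1f} TB per day")  # ~1.0 TB per day, from tiny messages
```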

Regular upgrades will have to be part of the broader M2M scenario, he says, raising the question of who should bear that cost. At the moment it looks like it would fall on carriers. “It will be a combination of developments, additional broadband technology, HTML5 will help and IPv6 of course. Plus the recognition that it can support and enhance business processes. There are a number of possibilities where M2M can play a role, such as automating processes that, in the past, would have been handled by people,” he says.

Bieler believes that M2M will eventually become embedded into business processes.

“I don’t see one part tipping the move to M2M, but several step changes, such as the 2014 EU regulations for cars that will certainly have a large impact on the automotive sector.”

Warren East, CEO of microprocessor company ARM – which designs the embedded chips used in much M2M technology – recently told Computer Weekly the initial growth of this market will be slow. “Over the next five years I’m not expecting there to be much take-up – the opportunity will be over the next decade,” he says.

Bieler agrees. “In the next five years enterprises will start to see the value in M2M, it will be seen as a necessity and start to become more widespread,” he says.

Picture Credit: Thinkstock



This was first published in October 2012

US prepares cyber offensive capability

US intelligence officials say there is growing evidence Iran was behind recent cyber attacks that disabled computers across the Saudi oil industry.

These attacks contributed to a warning last week from Defense Secretary Leon Panetta that the US was at risk of a “cyber-Pearl Harbor”, according to the New York Times.

After Panetta's remarks on Thursday, US officials described an emerging shadow war of attacks and counterattacks already under way between the United States and Iran in cyber space, the paper said.

There is no evidence the attacks were approved by the Iranian government, but several news agencies report that US officials suspect the involvement of the Iranian military’s cyber unit. The Iranian military set up the unit in 2011, and US officials suspect it of orchestrating the attack in August on regional rival Saudi Aramco, the Saudi state oil company.

Around 30,000 computers on Aramco’s network were infected with the Shamoon virus. The malware wiped all files on the computers by overwriting them.

Panetta said Iran had undertaken a concerted effort to use cyberspace to its advantage. Defence analysts interpreted that as meaning Iran had discovered a new way to harass its adversaries much sooner than expected, and that the US was ill-prepared to deal with it.

According to the New York Times, Iran has a motive to retaliate for the US-led financial sanctions that have cut its oil exports nearly in half, and for the cyber campaign by the US and Israel using Stuxnet against Iran's nuclear enrichment complex at Natanz.

The US has never officially acknowledged its role in creating Stuxnet and Panetta avoided using the words "offence" or "offensive" in the context of US cyber warfare. But he did say cyber defence alone would not prevent an attack.

"If we detect an imminent threat of attack that will cause significant, physical destruction in the United States or kill American citizens, we need to have the option to take action against those who would attack us to defend this nation when directed by the president," said Panetta.

"For these kinds of scenarios, the department has developed that capability to conduct effective operations to counter threats to our national interests in cyberspace.”

The US defence department had developed tools to trace attackers, Panetta said, and a cyber-strike force that could conduct operations via computer networks. The department is also finalising changes to its rules of engagement that would define when it could confront major threats quickly.

"Potential aggressors should be aware that the United States has the capacity to locate them and hold them accountable for actions that harm America or its interests," said Panetta.



Tuesday, October 16, 2012

EU advises Google to rethink privacy policy

The EU is to tell Google to change the way it gathers information on users to reduce the risk of infringing on their privacy.

After a nine-month investigation into Google’s business model, which depends on advertisements tailored to users based on personal data, the EU’s data regulators have drawn up 12 recommendations, the BBC reported.

The investigations followed concerns raised by the regulators in March when Google started combining data from several sites to target its advertising, consolidating 60 privacy policies into one.

Google has maintained the policy complies with EU law, but French regulator CNIL was tasked by the EU to investigate the policy.

The report stops short of declaring Google's data gathering practices illegal, but sets out 12 measures the company must implement to satisfy EU privacy concerns.

The recommendations are said to include a focus on personal information and browsing records, and the collection of location-based data and credit card details.

Analysts said Google should have realised that changes to its privacy policy would conflict with EU law, which highlights the fact that its business model relies on collecting personal data.

In July, Google was forced to submit a revised set of proposals to address the concerns of Europe’s competition authorities.

EU competition commissioner Joaquin Almunia asked Google to clarify some elements of the proposals submitted at the beginning of July.

In May, Almunia set a July deadline for Google to respond to four areas of concern about the company abusing its dominant position in its online search rankings.

The call for clarification suggests that Google's first set of proposals did not go far enough to address those concerns.

In June, the firm's executive chairman, Eric Schmidt, said Google disagreed that the firm had done anything to breach EU antitrust law.



Yahoo claims second Google executive as COO

Yahoo has appointed Henrique de Castro as chief operating officer (COO), just three months after appointing another former Google executive, Marissa Mayer, as chief executive officer (CEO).

De Castro is to oversee global sales, operations, media and business development, Yahoo said in a statement. 

Mayer, who has appointed new personnel to several senior management roles at Yahoo, said Henrique de Castro’s operational experience in internet advertising and his success in structuring and scaling global organisations made him a “perfect fit” for Yahoo.

Yahoo has been in a state of nearly constant turmoil since it rejected Microsoft's $44bn takeover offer in 2008.

Since then, it has been losing ground to Google in the search market and its email business has been hit by the rise of social media companies.

It is looking to re-establish itself as a leader in digital advertising in the face of strong competition from Facebook and Twitter.

Yahoo's share of US online advertising revenues fell to 9.5% last year, down from 15.7% in 2009, according to the BBC.

In July, Yahoo’s board said Marissa Mayer’s appointment signalled a renewed focus on product innovation to drive user experience and advertising revenue.

The appointment was announced just over a week after interim Yahoo CEO Ross Levinsohn helped end a patent battle with Facebook with an advertising partnership deal.

The initiative was widely expected to win Ross Levinsohn a permanent appointment as CEO, particularly after the withdrawal of rival Jason Kilar, head of the Hulu video streaming service.

Levinsohn was appointed interim CEO in May after Yahoo CEO Scott Thompson resigned over inaccuracies discovered in his corporate biography.

Thompson’s departure came just four months after he took over as chief executive from Tim Morse, who had held the role on an interim basis after Yahoo’s board removed Carol Bartz as chief executive in September 2011, after only two and a half years.

