Can HP Survive the Post-PC Revolution?
> December 29, 2012
> Can HP Survive the Post-PC Revolution?
> By: Sean Patterson | Staff Writer
> On November 20, 2012, Hewlett-Packard (HP) announced its second consecutive quarterly write-down of more than $8 billion. That in itself is cause for alarm, but the fact that HP blamed the write-downs on two of its largest recent acquisitions suggests the company is flailing to find a foothold in a world where PCs and printers are quickly becoming niche products.
> In 2012 the market for tablets and mini tablets exploded as Apple, Amazon, Google, and other manufacturers bet big on the technology. And it's worked. Global shipments for tablets are expected to rise to 210 million in 2013, beating estimates for PC shipments.
> Welcome to the Post-PC era. Here's your tablet.
> Will the desktop PC ever truly be gone?
> Give us your predictions in the comments.
> HP's flirtation with mobile technology consisted largely of its acquisition of Palm in 2010. However, the company wasn't able to compete with Apple, and discontinued all of its webOS products near the end of 2011. HP recently released several hybrid PC/tablet devices based on Microsoft's Windows 8 operating system, but if Microsoft's Surface is any indication, those devices aren't particularly in demand, at least from home consumers.
> So, having lost the hardware game that sustained it for decades, HP has turned to its enterprise services for revenue. Unfortunately, it hasn't found value there either, and HP has announced 29,000 layoffs planned by the end of fiscal year 2014.
> In HP's third quarter 2012 earnings report, the first $8 billion write-down was blamed on Electronic Data Systems (EDS), an IT services company HP bought in 2008 for $13.9 billion. Its value didn't hold, and HP shares continued to decline. At the end of 2012, HP stock is trading at around $14, down from highs of around $50 near the beginning of 2011.
> The company's fourth quarter 2012 earnings report blamed the majority of the second $8 billion write-down on Autonomy, a British knowledge management service company it acquired in 2011 for $10.2 billion. There was a twist this time, though. HP specifically laid the blame for more than $5 billion of the write-down on former Autonomy CEO Mike Lynch and other former Autonomy executives. HP brazenly accused Autonomy execs of "serious accounting improprieties, misrepresentations, and disclosure failures" prior to the acquisition.
> Almost immediately, Lynch fired back at HP, claiming that HP mismanagement was responsible for key people leaving Autonomy after the acquisition. Lynch even created a website and issued an open letter to combat the allegations. In his letter he denies any wrongdoing on Autonomy's part, saying, "I utterly reject all allegations of impropriety." He points out that world-class auditing firms, as well as HP's own accounting people, had access to Autonomy's books during the due diligence period of the acquisition.
> Lynch goes on to accuse HP of operational and financial mismanagement of Autonomy, and levels allegations of HP department infighting that make the company appear rather childish. He implores HP to explain, in detail, how $5 billion in accounting fraud could have gone unnoticed.
> HP responded to Lynch's open letter, though it did not detail the numbers in its allegations. Instead, HP insisted that the matter would be resolved by the UK Serious Fraud Office, the U.S. Securities and Exchange Commission, and the U.S. Department of Justice. It then haughtily added that it looks forward to "hearing Dr. Lynch and other former Autonomy employees answer questions under penalty of perjury." On December 28, 2012, it was confirmed by HP that the U.S. Department of Justice is currently investigating Autonomy's accounts.
> Obviously, someone is lying. Either Lynch and other Autonomy execs are guilty of a nearly unfathomable amount of fraud and deceit, or HP ran Autonomy (and possibly EDS, for that matter) into the ground with shoddy management. Either way, HP overpaid for Autonomy, a move that shows just how desperate the company is to gain traction with its enterprise services.
> Who is telling the truth, HP or Mike Lynch?
> Let us know your thoughts in the comments.
> As HP looks to the future, it's hard to see 2013 being a turnaround year for the company. In addition to Autonomy-related lawsuits, which will carry on for years, the company's restructuring efforts will continue to cut into its quarterly profits. While Apple and Samsung compete to dominate the new frontier of computer hardware, HP will be limping toward a coherent, fully integrated business structure.
> This doesn't mean the death of HP, though. In its many decades of existence the company has, much like IBM, re-invented itself a few times, and it is likely to survive by doing so again. Exactly what value HP will be bringing to customers in the future isn't clear, but come 2015 the company should be lean enough for a solid leader to take it in almost any direction.
> How can companies like HP succeed in the future?
> Let us know in the comments.
26 December 2012
City University of New York (CUNY) researchers have developed Commons in a Box, an open source software platform that enables universities and other organizations to install and manage community sites for faculty, staff, administrators, and students. The software runs on WordPress and a plug-in for WordPress called BuddyPress. "When you build your own network, it's meant to be very customizable and very flexible," says CUNY Academic Commons' Boone B. Gorges. "But with that kind of customizability comes a certain amount of complexity that was a barrier for a lot of people." Commons is designed to make learning communities accessible for organizations that do not have the staff or funding to design their own communities. An installation manager guides users through the steps to get their site up and running, and the system will not let anyone deactivate one thing if that deactivation will break something else that is activated. "What we're offering is a way for communities to really own their own data and their own spaces and to be in control of them, which we think is a pretty important move for higher education," says CUNY Academic Commons director Matthew K. Gold.
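The installer's refusal to deactivate a component that something else depends on can be sketched in a few lines. This is a hypothetical illustration in Python, not the actual Commons in a Box code; the plugin names and the `deactivate` helper are invented for the example.

```python
# Illustrative dependency-aware deactivation check, in the spirit of the
# Commons in a Box installation manager described above.

PLUGINS = {
    "wordpress":  {"active": True, "requires": []},
    "buddypress": {"active": True, "requires": ["wordpress"]},
    "cbox-theme": {"active": True, "requires": ["buddypress"]},
}

def dependents_of(name):
    """Return the active plugins that list `name` as a requirement."""
    return [p for p, info in PLUGINS.items()
            if info["active"] and name in info["requires"]]

def deactivate(name):
    """Deactivate a plugin, unless something active still needs it."""
    blockers = dependents_of(name)
    if blockers:
        raise RuntimeError(
            f"Cannot deactivate {name}: required by {', '.join(blockers)}")
    PLUGINS[name]["active"] = False
```

Deactivating `buddypress` while `cbox-theme` is still active would raise an error; deactivating the theme first, then BuddyPress, succeeds.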
08 December 2012
From ACM TechNews:
Mobile Browsers Fail Georgia Tech Safety Test
Georgia Tech News
(12/05/12) Michael Terrazas
Georgia Tech researchers have found that mobile Web browsers are so unsafe that even cybersecurity experts cannot tell when their smartphone browsers have landed on dangerous Web sites. "We found vulnerabilities in all 10 of the mobile browsers we tested, which together account for more than 90 percent of the mobile browsers in use today in the United States," says Georgia Tech professor Patrick Traynor. The main issue is graphical icons known as secure sockets layer (SSL) or transport layer security (TLS) indicators, which alert users that their connection to the destination Web site is secure and that the Web site they see is actually the site they intended to visit. Because of the small screens on most mobile devices, there is not enough room to incorporate SSL indicators as desktop browsers do. Displaying a graphical indicator that a site is secure in a Web browser's URL field is among the security guidelines recommended by the World Wide Web Consortium for browser safety. "Research has shown that mobile browser users are three times more likely to access phishing sites than users of desktop browsers," says Georgia Tech researcher Chaitrali Amrutkar.
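For context, here is roughly what the lock icon attests, sketched with Python's standard library: the connection is encrypted and the certificate chain verifies for the requested hostname. The function name is ours, and a real browser indicator conveys more than this (extended-validation status, mixed-content warnings).

```python
# Minimal sketch of a TLS check of the kind an SSL indicator summarizes.
import socket
import ssl

def connection_is_secure(hostname, port=443, timeout=5.0):
    """Return True only if a fully verified TLS connection succeeds."""
    context = ssl.create_default_context()  # verifies chain AND hostname
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.version() is not None  # e.g. "TLSv1.2"
    except (ssl.SSLError, OSError):
        return False  # handshake failed, cert invalid, or host unreachable
```

The point of the research above is that on a phone there may be no room to show the user the outcome of exactly this verification.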
22 November 2012
Georgia Institute of Technology professor Jacob Eisenstein and colleagues are using social networks to examine the evolution of language. The researchers collected 30 million Tweets sent from U.S. locations between December 2009 and May 2011, and built a mathematical model to capture the flow of new words between cities. The model found that new words tend to take life in cities with large African American populations before spreading more widely. Moreover, cities that are economically and ethnically similar are more likely to share new words. "Their results indicate that birds of a feather tweet together," says University of Groningen linguist John Nerbonne. The researchers are working on a more detailed analysis that could potentially reveal which cities are most influential. Eisenstein also wants to know whether neologisms spread more quickly because of Twitter and other social networks. An analysis of many types of data, including blog posts and Facebook entries, would enable the team to study whether social media is accelerating the evolution of language more generally.
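As a toy illustration of the raw signal such a model starts from (this is not the researchers' actual method, and the data is invented), one can record where each word is first observed and which cities adopt it later:

```python
# Given timestamped (city, word) observations, find each word's city of
# first observed use and the ordered list of cities that adopt it later.
from collections import defaultdict

tweets = [  # (timestamp, city, word) -- toy data
    (1, "Atlanta", "bruh"),
    (2, "Atlanta", "bruh"),
    (3, "Memphis", "bruh"),
    (5, "Boston",  "wicked"),
    (6, "Memphis", "bruh"),
    (7, "Chicago", "bruh"),
]

def first_seen_and_adopters(records):
    origin, adopters = {}, defaultdict(list)
    for ts, city, word in sorted(records):
        if word not in origin:
            origin[word] = city          # city of first observed use
        elif city != origin[word] and city not in adopters[word]:
            adopters[word].append(city)  # later adoptions, in time order
    return origin, adopters
```

A model like Eisenstein's would then ask which city-to-city flows best explain these adoption sequences.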
17 November 2012
Blogs aren’t exactly the new kids on the digital block, but they sure do get the job done. Here’s why I think you should seriously consider adding a blog to your personal or company website.
1. A blog shows visitors you know your stuff
Whatever your industry, potential customers probably want to know you understand it before they hand over their credit card number, or even call. If they can easily find some articles written by you and/or your staff that show your company’s expertise, they’re going to feel a lot more confident about spending their time or money (or both) with you.
2. A blog brings more visitors to your site
Sounds good – maybe even a little too good to be true, right? Nevertheless, if you’re blogging regularly and well – or even kind of well – you’re bound to get some social media attention that will bring more visitors to your website. You may earn links from other sites, too. And then there are the SEO benefits inherent in any decent blog…
3. Blog = SEO, baby
The more content on your website, the better your odds of ranking highly in Google search results for the keyword phrases Google finds in your posts.
Search engines really look for each page of a website to be about one specific thing. Every blog post is a page, so if you want to rank for 50 different keyword phrases, you really need to have close to 50 different pages (such as blog posts) with each page dedicated primarily to a single keyword phrase.
You can’t hope to rank well in search engines for many keywords that matter for your business if your site has just a few pages such as Contact, About Us, Meet Our Team, and a home page that talks about all the various things you offer.
4. Blog posts are link bait
We’re sure you know that links from other sites to yours – also known as backlinks or inbound links – are an important factor for helping your site appear higher in SERPs (search engine results pages). And as you probably also know, it isn’t easy to get someone to link from their site to yours. (Unless you pay them, and we don’t recommend that.)
It’s unlikely that someone will want to link to your home page, or to a page about your product or services. But how about an interesting and insightful blog post about your industry? That might do the trick.
If you regularly put out interesting or helpful content on your blog, you will have a much easier time getting backlinks. Be a good example: When you see interesting or helpful content on someone else’s blog that’s relevant to one of your posts, do include a link to that blog. (They may return the favor!)
5. A blog is food for social media
Just like it’s hard to rank well without blog posts, it’s hard to get anyone to share a link to your website on Twitter, Facebook or Google+ if you don’t regularly blog about interesting topics.
If you can blog well enough that people start sharing links to your blog posts from their social media accounts, there’s nothing better. People like to share valuable articles with their networks, and these social media links will help your SEO. Social media mentions and links are a signal to Google and Bing that people consider a site valuable – and the search engines use that signal to help them rank websites in search results. Plus, you’ll get more direct visitors from the people who actually click those social media links.
For tips on creating blog posts more people will want to share in social media, read our post You Deserve More Blog Comments & Shares.
One more great thing about blog posts – they give you something to mention in your own social media accounts. Some days even the cleverest person runs out of new things to say.
6. Behold the CTR power of Google Authorship
Have you ever noticed a search result that includes the author’s photo and byline? That’s called a rich snippet – a Google search result that isn’t just the plain ol’ link with a little text below. When the author of a blog post is hooked up with Google Authorship, their blog posts get that treatment in Google SERPs.
So what? Well, search results like this have a much higher CTR (click-through rate). MarketingTechBlog.com found that its posts with a Google+ portrait in search results were almost five times as likely to be clicked as links without it.
Read more about the importance of Google Authorship and how to set it up for your blog.
7. Unlike social media posts, you really do own your blog content – and you’re the one benefiting from it
For many business owners, the idea of giving up control is scary. But that’s exactly what you’re doing when you post content on a social network.
Facebook could decide that you’re violating their terms of service and delete your page. LinkedIn gets to decide which ads to place around your content. Twitter sometimes goes down and makes your tweets temporarily inaccessible. Pinterest could get bought by some company that doesn’t care about keeping the site working. The list of risks goes on.
When you put out great content on social networks, you get some benefits for sure – and so do the social networks. They get more pages in Google’s index, they can sell more advertising, and their valuation or stock prices could go up – all thanks to you and a few million others.
Now don’t get us wrong. Here at AboutUs, we definitely think social media is something businesses should be using. But we hate to see blogging overlooked when it has some serious advantages for almost any business.
8. Getting and keeping people on your site probably helps your business goals
Do you want people to buy your product, call you about your services, fill out a lead generation form, or do something similar? If someone is reading a blog post on your website, the odds are much better that they will.
9. Blog posts have a longer shelf life
Email newsletters, tweets and Facebook posts are fleeting, and will likely be forgotten and hard to find a few days after they’re sent or posted. Your blog is a great place for more evergreen topics that will stay relevant and helpful weeks or months after they’re first published.
It’s certainly fine to blog about timely and current things, and we encourage you to do so. But make sure the important things you want people to see now and later are published on your blog.
10. Help with customer support
Do you keep getting the same questions from your customers? Why not turn each question and answer into a blog post? It will lessen the load on your customer support people and make it easier for them to give more detailed answers more quickly. It can also show prospective clients that you are serious about helping your customers.
11. Cred for author
Do you want your CEO to be seen as a thought leader in your space? Do you want to give your employees some publicity? Writing for the company blog and getting a byline can give employees (including the CEO) some street cred, and probably a warm, fuzzy feeling. Bonus points for having each author create or enhance their Google+ profile, and for ensuring they have Google Authorship status. (Do NOT post everyone’s blog posts as Admin…we’re begging you.)
12. Blog comments – and commenters – are gold
One of the great features of blog posts is that they allow readers to leave a comment. This conversation is invaluable. It can give you ideas for your new product, next blog post, a better customer service process, or something else important to your business.
Because it takes time and effort to comment on a blog post, anyone who does is probably worth taking a look at. Are they a prospective client or business partner someone should follow up with? Is the commenter someone you should follow on Twitter? Is he or she a customer you should reach out to, maybe to offer support or ask for a testimonial?
Comments also make your blog post more meaty content-wise, both for your readers and for search engines. For tips on getting more blog comments, read our blog post about that.
13. Blogging shows you’re still in business, and rocking it
When I look at a company website, I often come away with a sense of how “alive” the company is. First and foremost, I can’t help but notice if the site looks like it was built in the ’90s, or if it’s modern. I also notice the presence or absence of social media links and a blog, and I pay attention to when the last blog post was published. For this reason, I only recommend adding a blog to your site if you plan to blog more than once a year.
An active blog (posts are published once a month, or more often) is a signal to Google that your site is “alive,” and that it’s worth revisiting regularly to re-index and discover new pages. If the content on your website hasn’t changed in years, Google will visit it less often. That sends a signal that the site isn’t so valuable, and can cause Google to rank the site lower than it would if you published more regularly.
Convinced? Here’s how to add a blog to your website
Adding a blog to your website can be very easy or pretty tricky, depending on how your website was set up. Talk with your technical or website person and see if they can add a blog to your website at www.YourWebsite.com/blog or blog.YourWebsite.com.
24 October 2012
Researchers at Deutsche Telekom Laboratories and Aalto University have found that customers of Amazon's EC2 cloud service do not all receive the same level of performance. Amazon says it uses generic hardware, but the team used tools to examine the software that controls the groups of servers customers rent, and was able to identify the chip at the heart of each server in a group, or instance, of computers. Measurements taken over the course of a year revealed that instances running newer, faster chips were much faster than clusters that used older hardware. "In general, the variation between the fast instances and slow instances can reach 40 percent," the researchers wrote in a paper, noting that for some applications the newer clusters worked about 60 percent faster. The faster instances would enable users to reduce their server bills by up to 30 percent because the newer machines are able to crunch data faster. The team is now working on tools that can determine the performance characteristics of particular clusters and push work to more powerful groups.
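One practical response these findings suggest is to benchmark a freshly launched instance and relaunch if it lands on slower hardware. A minimal sketch, with an invented microbenchmark and tolerance (the paper's numbers are measurements, not thresholds):

```python
# Crude "benchmark then keep or discard" policy for heterogeneous
# cloud instances. Illustrative only.
import time

def benchmark(iterations=200_000):
    """Time a fixed amount of integer work on this machine."""
    start = time.perf_counter()
    total = 0
    for i in range(iterations):
        total += i * i
    return time.perf_counter() - start

def keep_instance(baseline_seconds, tolerance=1.4):
    """Keep the instance only if it runs within `tolerance` of a known
    fast baseline (the study reports up to ~40% variation)."""
    return benchmark() <= baseline_seconds * tolerance
```

In practice one would also read the CPU model string and compare against a list of known-fast chips rather than rely on timing alone.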
23 October 2012
The overwhelming popularity of U.S.-backed programs to thwart online censorship is limiting access to the tools in repressive countries because demand is creating bottlenecks and there is insufficient funding to expand capacity. The United States invests about $30 million annually in Internet freedom, with the government funding nonprofits and other developers of software that can be downloaded by users in nations where censorship is rampant. But China, Iran, and other countries are stepping up their efforts to stifle Internet freedom, prompting proponents to urge the Obama administration to ratchet up its own initiatives. Calls for the Broadcasting Board of Governors to boost Internet freedom spending are being weighed against congressional pressures for the agency to reduce its budget significantly. Officials privately say the funding issues are mired in security and political concerns. Internet freedom activists say the development of online censorship bypassing tools is partly challenged by the need to determine the amount to invest now to help users avoid detection, versus how much to invest on more complex future projects that could keep up with censorship technology.
12 October 2012
The Internet Engineering Task Force (IETF) is working on the next generation of Hypertext Transfer Protocol (HTTP), says Mark Nottingham, chairman of the IETF HTTP working group. Google's SPDY protocol will serve as the basis for the updated protocol underlying the Web. HTTP version 2.0 will accommodate the evolving use of the Web as a platform for delivering applications and bandwidth-intensive, real-time multimedia content. The working group will look to reduce latency and streamline the process of how servers transmit content to browsers, but the protocol also must be backward compatible with HTTP 1.1 and remain open to be extended for future uses as well. HTTP 2.0 will primarily use TCP, but other transport mechanisms may be substituted. A proposed standard is scheduled to be submitted to the Internet Engineering Steering Group by 2014. The working group also will continue to refine HTTP 1.1. "Having a real, stable, solid, mature standard would be key to further improvements of HTTP protocol," says NGINX's Andrew Alexeev. "There's definitely the need for a modern Web protocol that is well suited for today's and tomorrow's Internet infrastructure, Web architectures, applications, server, and client software."
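A back-of-envelope model shows the latency problem that SPDY-style multiplexing addresses: on a single HTTP/1.1 keep-alive connection, requests queue behind one another, while multiplexed streams share one connection concurrently. The arithmetic below is illustrative, not a measurement, and ignores bandwidth and server time.

```python
# Idealized total latency for n requests over one connection.

def http11_total(n_requests, rtt):
    """Sequential requests on one keep-alive connection: each response
    must finish before the next request goes out."""
    return n_requests * rtt

def http2_total(n_requests, rtt):
    """Multiplexed streams: all requests are in flight at once, so the
    total approaches a single round trip."""
    return rtt if n_requests > 0 else 0
```

For 10 resources at a 100 ms round trip, the sequential model pays roughly a full second of round trips where the multiplexed one pays roughly one, which is the kind of gap the working group is chasing.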
Tsinghua University researchers have developed TransOS, a cloud computing operating system that stores its code in the cloud, which enables a connection from a bare terminal computer. The terminal has a minimal amount of code that dynamically connects it to the Internet, after which TransOS downloads specific pieces of code that offer users options as if they were running a conventional operating system via a graphical user interface. Applications call on TransOS only as needed so that memory is not used by inactive operating system code as it is by a conventional desktop computer operating system. "The TransOS manages all the networked and virtualized hardware and software resources, including traditional OS, physical and virtualized underlying hardware resources, and enables users to select and run any service on demand," according to the Tsinghua researchers. They note TransOS also could be implemented on other devices, such as refrigerators and washing machines, and factory equipment. The researchers say it is essential that a cloud operating system architecture and relevant interface standards be established to enable TransOS to be developed for a vast range of applications.
06 October 2012
A device that controls light could lead to faster Internet download speeds, lower Internet transmission costs, and lower power consumption. Scientists and engineers at the University of Minnesota developed the microscale device, which uses force generated by light to flip a mechanical switch of light on and off at a very high speed. "This device is similar to electromechanical relays but operates completely with light," says professor Mo Li. In the device, the force of light is so strong that its mechanical property can be dominated completely by the optical effect rather than its own mechanical structure. The effect is amplified to control additional colored light signals at a much higher power level. "This is the first time that this novel optomechanical effect is used to amplify optical signals without converting them into electrical ones," Li notes. Glass optical fibers carry multiple communication channels using different colors of light assigned to different channels. In optical cables, these multi-hued channels do not interfere with each other, ensuring the efficiency of a single optical fiber to transmit more information over very long distances. The new optical relay device operates 1 million times per second, but researchers expect to improve it to several billion times per second.
Fraunhofer Institute researchers have developed an infrared module that transfers data at a rate of 1 Gbps. The multi-gigabit communication module is six times faster than a USB2 cable, 46 times faster than conventional Wi-Fi, and 1,430 times faster than a Bluetooth connection. The challenge for the researchers was to build a small infrared module with fast-working hardware and software. "We achieved this ultimately through a clever combination of different technical solutions," says Fraunhofer researcher Frank Deicke. One of the solutions is a transceiver that can simultaneously send and receive light signals. The transceiver incorporates a laser diode to send light pulses and a photo detector to detect them, while the decoders that receive and translate the encoded data also are critical. "Our current infrared module has already demonstrated that infrared technology is able to go far beyond established standards," Deicke says. His participation in the Infrared Data Association reflects his dedication to improving on 1 Gbps, and Deicke has already been able to demonstrate that the transfer rate of his current model can be upgraded to 3 Gbps.
04 October 2012
The Eurocloud project aims to develop a three-dimensional (3D) microchip that can drastically cut the electricity and the installation costs of servers in cloud computing data centers. The Eurocloud project has adapted low-power microprocessor technologies, which are normally used in mobile phones, to work on a much larger scale. Initial testing shows that the new technology could reduce power needs by as much as 90 percent compared to conventional servers. The researchers say the results could make data center investment affordable to more companies, while saving the cloud computing customers of data centers billions of dollars. Eurocloud is targeting the development of server chips that cost 10 times less to buy and consume 10 times less energy when they operate compared to current cutting-edge servers. The project is focusing on virtual prototype specialization of 3D servers, characterization of cloud applications, scalable 3D architecture specifications, on-chip hierarchies, and reliability, availability, and fault tolerance. "Today's power-hungry cloud data centers are not sustainable in the long run," says the European Commission's Neelie Kroes. "The Eurocloud chip addresses the core of this energy consumption problem."
24 September 2012
Benjamin Gottlieb
The Iranian government reportedly has established a technical platform for a national online network that would exist independent of the Internet and allow for tighter information regulation. The network's development has been accelerated by cyberattacks targeting Iran's nuclear program, according to Iranian officials and outside experts. A forthcoming report from U.S. security researchers working under the aegis of the University of Pennsylvania's Center for Global Communications Studies found functional versions of the sites of Iranian government ministries, universities, and businesses on the network, as well as indications of an operational filtering capability. The researchers note the network already is "internally consistent and widely reachable." The findings have sparked concerns not just about human rights violations but also about Internet integrity, says the U.S. State Department's David Baer. "When countries section off parts of the Web, not only do their citizens suffer, everyone does," Baer says. With the infrastructure for a self-contained, Iran-only Internet in place, the government would have more power to suppress online access during periods of civil unrest. Retired U.S. National Security Agency deputy director Cedric Leighton says the construction of a national network could give government-supported hackers more capabilities to launch and repulse cyberattacks.
14 September 2012
From ACM TechNews:
Who's Trustworthy? A Robot Can Help Teach Us
New York Times
(09/10/12) Tara Parker-Pope
Researchers at Northeastern University, the Massachusetts Institute of Technology, and Cornell University say they have found specific behaviors that seem to warn the human brain that another person cannot be trusted. First, the researchers filmed students interacting with other students they had never met in a game designed to elicit untrustworthy behavior. The researchers found that cues such as leaning away from someone, crossing arms in a blocking fashion, rubbing the hands together, and touching oneself on the face were indicators of untrustworthy behavior. "The more you saw someone do this, the more intuition you had that they would be less trustworthy," says Northeastern professor David DeSteno. The researchers then set up the same experiment with students playing the game with a friendly-faced robot. Some of the robots performed the untrustworthy cues while others did not, and the students rated the robots that displayed the cues as less trustworthy. "It makes no sense to ascribe intentions to a robot, but it appears we have certain postures and gestures that we interpret in certain ways," says Cornell professor Robert H. Frank. The study suggests there could be an evolutionary benefit to cooperation, and to being able to identify untrustworthy people.
07 September 2012
Birmingham University researchers have found that users who frequently access BitTorrent file-sharing sites are more vulnerable to having their Internet Protocol (IP) address logged by monitors within three hours of accessing the site. Led by Birmingham's Tom Chothia, the researchers found the extent to which monitors are tracking users on file-sharing sites by monitoring activity themselves over a two-year period. Users who go to BitTorrent sites generally become aware of blocklists, which are lists of the IP addresses of known monitors. However, the researchers found that these lists include many false positives and negatives, making them almost useless in preventing monitoring. In order to determine which clients were real users and which were monitors, the researchers identified several characteristics of monitors that make them stand out. The researchers found that monitors are much busier and more active than users who generally tend to only log on when they want a certain file.
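The "monitors are busier than users" observation amounts to a simple classification heuristic. The sketch below is illustrative; the thresholds and peer records are invented for the example, not taken from the study.

```python
# Toy heuristic: flag peers that sit in many swarms with near-continuous
# uptime, since ordinary users connect briefly for specific files.

def looks_like_monitor(peer, swarm_threshold=50, uptime_threshold=0.9):
    """peer: dict with 'swarms' (number of torrents the IP was seen in)
    and 'uptime' (fraction of the observation window it was online)."""
    return (peer["swarms"] >= swarm_threshold
            and peer["uptime"] >= uptime_threshold)

peers = [
    {"ip": "203.0.113.5",  "swarms": 120, "uptime": 0.98},  # monitor-like
    {"ip": "198.51.100.7", "swarms": 3,   "uptime": 0.05},  # user-like
]
monitors = [p["ip"] for p in peers if looks_like_monitor(p)]
```

A list built from behavior like this would avoid the false positives and negatives that make static IP blocklists nearly useless.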
06 September 2012
A strong free software movement focused on the principled issues of software freedom — and a strong FSF in particular — will determine what freedoms the next generation of computer users enjoy. At stake is no less than the next generation's autonomy.
— Benjamin Mako Hill, writer, technologist and FSF board member
25 August 2012
@ElnashraNews: Head of the intelligence directorate of Iran's Revolutionary Guard: We have a responsibility to support Assad http://bit.ly/PPj0fR #Syria #Iran #fb Shared via TweetCaster
24 August 2012
19 August 2012
The U.S. National Science Foundation (NSF) recently asked a team of researchers from North Carolina State University (NCSU), the University of Massachusetts, the University of Kentucky, and the University of North Carolina at Chapel Hill to develop the key components for a networking architecture that could serve as the backbone of a new Internet that gives users more choices about which services they use. "Ultimately, this should make the Internet more flexible and efficient, and will drive innovation among service providers to cater to user needs," says NCSU professor Rudra Dutta. NSF says the new Internet architecture will hinge on users being able to make choices about which features and services they want to use. The architecture should encourage alternatives by providing different types of services, which would enable users to select the service that best meets their needs. The architecture also should enable users to reward service providers that offer superior services, which will encourage innovation. Finally, the architecture must be able to give users and service providers the ability to exchange information about the quality of the service being provided.
13 August 2012
Wall Street Journal (08/09/12) Amir Efrati
Former Google executive Marissa Mayer has a plan to reverse Yahoo!'s waning fortunes as its new CEO, using lessons she applied at Google that include placing products and users first and developing or obtaining Web services that leverage social media, mobile devices, and other new platforms. Mayer has intimated to Yahoo! employees as part of her product-focused campaign that she wants to retool the Yahoo! Web-search and email service, whose use is in decline. Mayer also is interested in placing more Yahoo! content and advertising on other sites. This aligns with an old plan to launch a network that helps Web site publishers install new Yahoo! software on their pages to display visitor-customized articles and videos, which is known as content personalization. Moreover, the new CEO is striving to cultivate relationships with Yahoo!'s programmers through regular email discussions with software engineers who do not report to her. In addition, Mayer has emphasized the value of analyzing data on people's usage of Yahoo! Web sites and mobile apps, and the necessity of generating such user-behavior information before making decisions on whether to create a new service.
09 August 2012
07 August 2012
A new study shows that the iPhone has regained ground in the United States in recent months. Even though Apple can count on only a single model, and one at the end of its life cycle at that, its market share grew 1.7 points to 32.4%, according to the latest comScore figures (end of March to end of June). But Android continues to dominate, with slower growth (0.6 point) yet a much larger market share of 51.6%. More than half of the smartphones sold in the United States therefore run Google's operating system, and Apple and Android together hold 84% of the market. In third place, RIM continues its decline, losing 1.6 points to a 10.7% share, for lack of attractive new models. The disappointment is even greater for Windows Phone, which is failing to take off: despite Nokia's efforts and price cuts on the most recent model (the Lumia 900), the market share of Microsoft's OS slipped 0.1 point to 3.8%. Among mobile handsets overall, Samsung remains in the lead with a 25.6% share (-0.4 point), followed by LG (18.8%, -0.5 point), Apple (15.4%, +1.4 point), Motorola (11.7%, -1.1 point), and HTC (6.4%, +0.4 point).
01 August 2012
31 July 2012
29 July 2012
Los Angeles Times (07/23/12) Michael Hiltzik
The Wall Street Journal's Gordon Crovitz recently reopened the debate about who invented the Internet, arguing that giving the U.S. government credit is an "urban legend." However, Michael Hiltzik notes that ACM president Vint Cerf, along with Robert Kahn, invented TCP/IP, the fundamental communications protocol of the Internet, on a government contract. Crovitz's main argument for denying the U.S. Pentagon's Advanced Research Projects Agency (ARPA) credit for the development of the Internet is a quote from Robert Taylor, who was a top official at ARPA when the agency was developing ARPANet, the commonly agreed upon precursor to today's Internet. "The ARPANet was not an Internet," Taylor says. "An Internet is a connection between two or more computer networks." However, Hiltzik says Crovitz confuses "an internet" with "the Internet," as Taylor was citing a technical definition of "internet" in his statement. Cerf himself wrote in 2009 that ARPANet ultimately led to the Internet. Hiltzik says the fact is that the Internet as we know it was born as a government project, and without ARPA it might not have come into existence at all.
11 July 2012
Computerworld (07/05/12) Sharon Gaudin
Google's recently unveiled computerized eyeglasses could mark the beginning of a new computing era in which wearable computers are common. The Google Glass development effort is all about "doing brand new risky technological things that are really about making science fiction real," says Google cofounder Sergey Brin. He says the next generation of computers likely won't sit on a desk or have keyboards or monitors. Meanwhile, analysts predict that future computers will be incorporated into other items that people use, such as clothing or jewelry. "I believe that in five years we will see many different form factors and brands of wearable computers," says analyst Patrick Moorhead. He says Google's research could lead to the mainstream use of these new technologies. "We can go beyond the glasses and visualize computers in our jewelry, in our watches, and even inside our bodies," Moorhead says. Google Glass and other wearable computers also could be very useful in many workplaces, notes analyst Rob Enderle. "They could be used regularly for things like taking inventory in warehouses, and for tasks on factory floors and other places where folks need to use computers and their hands at the same time," Enderle says.
New Technologies Spread Arrival of Robots Into Our Lives
USA Today (07/05/12) Jon Swartz
Robotics experts predict that within 10 years general-purpose robots will perform household chores while consumers are at work. "We are putting robots into people's lives," says Bossa Nova Robotics co-founder Sarjoun Skaff. Companies are developing dexterous robots capable of assembling smartphones and working safely in close proximity to people. Carnegie Mellon University researchers are developing software that enables robots to determine which parts to choose and assemble properly. These new systems are more efficient tools for repetitive tasks and could greatly reduce the labor costs of consumer electronics manufacturers. In the next 10 years, groups of unmanned planes will attack enemy sites, launching missiles and avoiding detection by using sophisticated jamming technologies. The transition to automated weaponry is crucial to the military's transformation from heavy ground forces to smaller human units backed by large robotic weapons. "A robot is the interface between the information world and physical world," says SRI International's Richard Mahoney. Recent movies have humanized robotics technology, making people more comfortable with the idea of interacting with robots. "Robots will be bigger than the [personal computer] in 10 to 20 years, but it will be linked to your computing device either in the cloud or on your person," predicts Orbotix CEO Paul Berberian.
Sharing Data Links in Networks of Cars
MIT News (07/05/12) Larry Hardesty
Researchers at the Massachusetts Institute of Technology (MIT), Georgetown University, and National University of Singapore recently presented an algorithm that enables Wi-Fi-connected cars to share their Internet connections. The algorithm's approach is to aggregate data from hundreds of cars at a small number of them and then upload it to the Internet. However, the difficulty lies in the fact that the layout of a network of cars is constantly changing in unpredictable ways. In general, the cars that come into contact with the most other cars would aggregate the data. Using realistic assumptions, the researchers determined that for every 1,000 cars, five cars would aggregate and upload the data, says MIT graduate student Alejandro Cornejo. The researchers were able to show that the algorithm would still function well even if there were sparse connections between cars. However, their analysis also demonstrates that aggregation becomes impossible if the network of cars has slightly more linkages between its clusters. "There's this paradox of connectivity where if you have these isolated clusters, which are well-connected, then we can guarantee that there will be aggregation in the clusters," Cornejo says. "But if the clusters are well connected, but they're not isolated, then we can show that it's impossible to aggregate."
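The core idea, letting the few best-connected cars collect data on everyone's behalf, can be sketched as follows. This is an illustrative simplification, not the researchers' actual algorithm; the function name, contact-event representation, and the five-per-thousand ratio taken from the article are the only inputs, and everything else is assumed.

```python
from collections import Counter

def choose_aggregators(contact_events, per_thousand=5):
    """contact_events: list of (car_a, car_b) encounters observed over a
    time window. Returns roughly 5 cars per 1,000 (the article's figure),
    picking the cars that encountered the most other cars."""
    counts = Counter()
    for a, b in contact_events:
        counts[a] += 1
        counts[b] += 1
    n_cars = len(counts)
    k = max(1, round(n_cars * per_thousand / 1000))
    return [car for car, _ in counts.most_common(k)]

events = [("c1", "c2"), ("c1", "c3"), ("c1", "c4"), ("c2", "c3")]
print(choose_aggregators(events))  # -> ['c1'], the best-connected car
```

The real algorithm must make this choice without global knowledge and while the contact graph keeps changing, which is where the hardness described above comes from.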
New Technology Slashes Data Center Energy Consumption
CORDIS News (07/04/12)
A new energy-aware plug-in can reduce energy consumption in data centers by more than 20 percent, according to researchers with the European Union-funded Federated IT for a Sustainable Environment Impact (FIT4Green) project. Experts from industry and academia designed the technology to work on top of the current management tools used by data centers to organize the allocation of information and communications technologies resources and turn off unused equipment. The FIT4Green plug-in does not compromise the equipment's compliance with service-level agreements and quality-of-service metrics. The plug-in is designed to work in any data center type, and the savings ranged from 20 percent to as much as 50 percent during testing. Moreover, the savings in carbon dioxide emissions were on the same scale as the energy savings, while direct energy savings for information and communications technology gear also induced additional savings due to reduced cooling needs. One of the project participants, the VTT Technical Research Center, mainly focused on optimizations for supercomputing applications. The technology is now available to data centers, while the plug-in code has been released as open source software.
European Parliament Rejects Anti-Piracy Treaty
New York Times (07/05/12) Eric Pfanner
An international pact to fight digital piracy has been rejected by the European Parliament, and opponents see this as a triumph of their campaign to discourage Internet strictures. Meanwhile, groups representing media companies and other rights holders say protesters had distorted the debate to make the Anti-Counterfeiting Trade Agreement (ACTA) appear more sinister than it was, and the European rejection will hurt initiatives to curb online copyright theft. Copyright owners were hoping that ACTA would give them additional authority to prosecute rights violations, especially in developing nations marked by lax enforcement. The Parliament "has given in to pressure from anti-copyright groups despite calls from thousands of companies and workers in manufacturing and creative sectors who have called for ACTA to be signed in order that their rights as creators be protected," says European Publishers Council executive director Angela Mills Wade. However, opponents say the agreement could still supply a legitimate international framework for antipiracy strategies they detest, such as a system in France that suspends Internet access for repeat violators. The treaty also urges Internet service providers to act as copyright enforcers, which they have generally been reluctant to do.
Algorithm Identifies Top Ten Technology News Trend Setters
Technology Review (07/04/12)
Berlin Institute of Technology researchers are studying the problem of trend setting among news sites in an effort to determine which Web sites lead the news coverage and which ones follow it. The approach involves taking a snapshot of the words generated by a group of Web sites at any point in time and comparing it to the words generated by one of the Web sites at an earlier point in time, which enables them to calculate whether the content of the earlier Web site is a good predictor of future content on other sites. The researchers monitored 96 technology news sites throughout 2011, generating data on about 100,000 words. The researchers found that the top 10 trendsetters in technology news coverage were BusinessInsider, Arstechnica, Engadget, TechCrunch, Mashable, Venturebeat, Techdirt, The Register, Forbes, and Guardian. By the Berlin researchers' test, the trend setters were the sites that posted stories first, or that posted so many stories that they were first often enough to appear to be trend setters. The research could lead to insights into how diseases spread in epidemics, or into determining where the first spark of a forest fire occurred.
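The snapshot comparison described above can be reduced to a toy scoring function: a site is a trend setter to the extent that its earlier vocabulary predicts the words the whole group uses later. The scoring function below is an illustrative assumption; the Berlin group's actual statistical model is more sophisticated.

```python
def trendsetter_score(site_words_earlier, group_words_later):
    """Fraction of the group's later vocabulary that the site
    had already used at the earlier point in time."""
    earlier = set(site_words_earlier)
    later = set(group_words_later)
    return len(earlier & later) / len(later) if later else 0.0

# Words the whole group of sites used in the later snapshot:
later_snapshot = ["tablet", "quadcore", "patent", "lawsuit"]

print(trendsetter_score(["tablet", "quadcore", "benchmark"], later_snapshot))  # 0.5
print(trendsetter_score(["football", "playoffs"], later_snapshot))             # 0.0
```

Ranking 96 sites by a score like this, computed over many time windows, is what separates the leaders from the followers in the study.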
Patent Trawler Aims to Predict Next Hot Technologies
New Scientist (07/03/12) Paul Marks
Hungarian Academy of Sciences researchers have developed a data-mining tool that automatically helps predict emerging technologies. The tool works by analyzing the frequency with which patents are cited by other patents. The researchers say that plotting how the frequency of these citations changes over time shows that patents can be grouped into related clusters, which in turn can evolve, sometimes branching into new disciplines and other times merging with one another. The researchers, led by Peter Erdi, have developed software that charts this evolution and extrapolates the rate and type of citations into the future to help predict whether existing technologies can lead to new areas of innovation. "Patent-citation data seems to be a gold mine of new insights into the development of technologies, since it represents the innovation process," Erdi says. In making innovation slightly more predictable, the researchers aim to remove some of the risk in futurism. "It sounds like a great antenna for what is happening in the marketplace, and the kind of discussions people are having," says Ford Motor futurist Sheryl Connelly.
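One ingredient of the approach above, grouping patents into related clusters via citation links, can be sketched as connected components over the citation graph. This is a deliberately minimal stand-in: the Hungarian group's analysis tracks how clusters evolve, branch, and merge over time, which plain components cannot capture, and all identifiers here are invented.

```python
def citation_clusters(citations):
    """citations: list of (citing_patent, cited_patent) pairs.
    Returns clusters of patents linked by citations, largest first,
    using a small union-find over the citation graph."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for citing, cited in citations:
        parent[find(citing)] = find(cited)  # merge the two components

    clusters = {}
    for node in list(parent):
        clusters.setdefault(find(node), set()).add(node)
    return sorted(clusters.values(), key=len, reverse=True)

links = [("US1", "US2"), ("US2", "US3"), ("US9", "US8")]
print(citation_clusters(links))  # largest cluster {US1, US2, US3} first, then {US8, US9}
```

Watching how such clusters grow or split from one time window to the next is what turns the citation record into a forecasting signal.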
Mobile, Java Developers Hard to Find: Dice
eWeek (07/03/12) Nathan Eddy
Software developers in general, as well as Java developers, mobile software developers, Microsoft .Net developers, and general security specialists are the five most difficult positions for information technology (IT) managers to fill, according to Dice.com. SAP developers, Microsoft SharePoint specialists, Web developers, active federal security clearance specialists, and network engineering professionals round out the top 10 most difficult-to-fill IT jobs. Dice.com surveyed 866 technology-focused hiring managers and recruiters, and found that the market for some skills is expanding faster than the talent pool can adapt. "Technology hiring managers largely want journeymen, not apprentices," Dice notes. "Competition is fierce when companies are all chasing the same talent, making positions hard to fill." As of July 2, 84,940 tech jobs were available, including 52,290 full-time positions, 36,157 contract positions, and 1,677 part-time positions. The New York/New Jersey metro area led the country with 8,871 positions listed, followed by the Washington, D.C./Baltimore metro area with 8,334, Silicon Valley with 5,684, Chicago with 3,900, and Los Angeles with 3,551. Dice.com notes the unemployment rate for technology professionals is about 3.5 percent, far lower than the overall U.S. jobless rate, but unlikely to move much lower.
Doing Apps and Start-Ups While Still in High School
New York Times (07/02/12) Quentin Hardy
Palo Alto High School students recently founded the Paly Entrepreneurs Club, an extracurricular group for students who want to create start-ups and develop future technologies. The group meets weekly during the school year to discuss their ventures and ideas, explore issues such as money-raising strategies and new markets, and host guest speakers. "I want to build something that is tied to what is happening next," says Paly member Matthew Slipper. Club members have been working on several projects, such as a social network to help teenagers organize study groups, and a trading network for Bitcoin, a virtual currency. "The goal here is inspirational," says Aaron Bajor, one of the group's founders. "A great idea can hit you any time. Even if you do not have a great idea yet, if you have capabilities and passion others will want you on their team." The start-up mentality is something Paly students are born with, as many of their parents work in the tech industry. "The kids here have such an unfair advantage," says Box CEO Aaron Levie, who spoke at a recent meeting. "I told them to make friends and leverage their four years of freedom."
Soap Bubble Screen Is 'The World's Thinnest Display'
BBC News (07/02/12)
University of Tokyo researchers have developed a display that uses ultrasonic sound waves to alter soap film's properties and create either a flat or a three-dimensional image. The researchers say the display is the world's thinnest transparent screen. "We developed an ultra-thin and flexible [bidirectional reflectance distribution function] screen using the mixture of two colloidal liquids," says Tokyo researcher Yoichi Ochiai. The display varies in transparency and reflectance, which the researchers can control by hitting the bubble's membrane with ultrasonic sound waves played through speakers. The waves alter the texture of a projected image, making it look smooth or rough. Changing the wave's frequency modifies the reflective property of the screen, meaning that the transparency of the projected image also can be changed. "Our membrane screen can be controlled using ultrasonic vibrations," Ochiai says. "Membrane can change its transparency and surface states depending on the scales of ultrasonic waves."
See, Feel, Hear and Control Your Environment, Virtually
A*STAR Research (07/02/12)
A*STAR researchers have developed technologies that capture and analyze massive amounts of data to create systems that enhance urban living. The A*STAR Science and Engineering Research Council's (SERC's) Sense and Sense-abilities program focuses on pervasive sensing to tackle challenges that city planners face in developing urban environments. The technologies can be used for targeted marketing, enhancing product placement in stores, and deploying traffic management systems. For example, the A*STAR smart energy showcase demonstrates how "Smart Plugs" can be used to remotely monitor and control home appliances over the Internet. The researchers also note that A*STAR's sustainable manufacturing research promotes processes that efficiently recycle used materials, reduce a manufacturer's carbon footprint, and explore technologies that can be used for sustainable urban living. These technologies also can model urban environments to enable city planners to see what future cities might look like. For example, weather and genomic data can be used to combat diseases by predicting possible outbreaks and allowing effective intervention strategies to be rolled out faster. "A*STAR's highly engaging and exciting technology showcases provides an insight of how future cities may function in an even more intelligent and sustainable environment," says A*STAR SERC executive director Raj Thampuran.
At USC, Attacking Combat Vets' PTSD With Virtual Reality
CNet (07/02/12) Daniel Terdiman
Researchers at the University of Southern California's Institute of Creative Technologies (ICT) are developing virtual reality systems to help combat post-traumatic stress disorder (PTSD). The Emergent Leader Immersion Training Environment (ELITE) tasks young officers with learning how to handle struggling subordinates. ELITE utilizes a virtual human and a system that runs trainees through a series of scenarios in which they have to question a subordinate who has gotten into trouble that may stem from an undiagnosed case of PTSD. The program aims for the young officers to learn how to steer their charges in the right direction when this type of trouble arises. ICT researcher Skip Rizzo developed the Virtual Iraq/Afghanistan system, which is designed for use in conjunction with slow and methodical therapy. The ICT researchers also developed SimCoach, a Web-based virtual human programmed to ask questions that can help loved ones understand how to cope with someone suffering from PTSD.
University of Oxford (06/29/12) Pete Wilton
Computer scientist Stephen Wolfram recently gave a lecture at Oxford University in which he described "computational irreducibility," the idea that some computations cannot be accelerated by any shortcut, so the only way to determine what is going to happen is to simulate each step. Computational irreducibility is a "junior version of undecidability," which is the idea that when you ask what will ultimately happen, the answer is undecidable, according to Wolfram, who created computational tools such as Mathematica and Wolfram Alpha. He says Wolfram Alpha aims to "collect as much knowledge as possible and make it computable," an approach that could be applied to find out which theories about a certain structure or system were more powerful. For example, he notes that a recent pilot study focusing on continued fractions is already showing that the process of organizing theories in a systematically computable way is leading to new advances.