The BBC can carry on investing in internet TV outfit YouView, the BBC Trust has ruled. The oversight panel "endorsed" the broadcaster's continued involvement in the video on-demand organisation and has published a report supporting its decision.…
electronicsforu: Matlab's easy-to-use interface, power, and flexibility make it a deservedly popular and useful piece of software.
By the end of the year there will be more mobile phone connections than people on the planet, although they mostly belong to the richer part of the planet – a whopping half the world’s population doesn’t have connectivity.…
The Fetch as Google feature in Webmaster Tools provides webmasters with the results of Googlebot attempting to fetch their pages. The server headers and HTML shown are useful to diagnose technical problems and hacking side-effects, but sometimes make double-checking the response hard: Help! What do all of these codes mean? Is this really the same page as I see it in my browser? Where shall we have lunch? We can't help with that last one, but for the rest, we've recently expanded this tool to also show how Googlebot would be able to render the page.
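If you want to double-check the raw response yourself, outside of Webmaster Tools, a short script along these lines prints the status code and headers your server returns to a crawler-style request. This is only a minimal sketch: the URL is a placeholder, the user-agent string merely approximates a crawler, and your server may treat real Googlebot traffic differently. Note that urllib raises an HTTPError for non-2xx responses, which is itself a useful diagnostic signal.

# Minimal sketch: fetch a page with a crawler-style User-Agent and print
# the status code and response headers for comparison with Fetch as Google.
# The URL below is a placeholder; the UA string is only an approximation.
import urllib.request

url = "http://www.example.com/page.html"  # placeholder URL
req = urllib.request.Request(
    url,
    headers={"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                           "+http://www.google.com/bot.html)"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.reason)
    for name, value in resp.getheaders():
        print(name + ": " + value)
    html = resp.read().decode("utf-8", errors="replace")
    print(html[:500])  # first 500 characters of the served HTML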
You can find the Fetch as Google feature in the Crawl section of Google Webmaster Tools. After submitting a URL with "Fetch and render," wait for it to be processed (this might take a moment for some pages). Once it's ready, just click on the response row to see the results.
Handling resources blocked by robots.txt
Googlebot follows the robots.txt directives for all files that it fetches. If you are disallowing crawling of some of these files (or if they are embedded from a third-party server that's disallowing Googlebot's crawling of them), we won't be able to show them to you in the rendered view. Similarly, if the server fails to respond or returns errors, then we won't be able to use those either (you can find similar issues in the Crawl Errors section of Webmaster Tools). If we run across either of these issues, we'll show them below the preview image.
We recommend making sure Googlebot can access any embedded resource that meaningfully contributes to your site's visible content, or to its layout. That will make Fetch as Google easier for you to use, and will make it possible for Googlebot to find and index that content as well. Some types of content – such as social media buttons, fonts or website-analytics scripts – tend not to meaningfully contribute to the visible content or layout, and can be left disallowed from crawling. For more information, please see our previous blog post on how Google is working to understand the web better.
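One way to verify which embedded resources your robots.txt currently blocks is to test them with a robots.txt parser. The sketch below uses Python's urllib.robotparser; the site, resource URLs, and user-agent token are placeholders for illustration and do not exactly reproduce Googlebot's behaviour.

# Minimal sketch: check which resource URLs a robots.txt allows a given
# user-agent to fetch. All URLs and the user-agent token are placeholders.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")  # placeholder site
rp.read()

resources = [
    "http://www.example.com/css/site.css",
    "http://www.example.com/js/app.js",
    "http://www.example.com/js/analytics.js",
]
for url in resources:
    allowed = rp.can_fetch("Googlebot", url)
    print(("allowed" if allowed else "blocked"), url)

If a stylesheet or script that contributes to visible content or layout shows up as blocked, it is a good candidate for loosening the corresponding Disallow rule; purely auxiliary scripts such as analytics can stay blocked.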
We hope this update makes it easier for you to diagnose these kinds of issues, and to discover content that's accidentally blocked from crawling. If you have any comments or questions, let us know here or drop by in the webmaster help forum.
Posted by Shimi Salant, Webmaster Tools team
The Australian government's plan to scrap its Renewable Energy Target (RET), pitched as a way to cut power bills down under, will drive up electricity prices. That's according to an analysis by Bloomberg New Energy Finance (NEF).…
Sometimes things don't go perfectly during rendering, which may negatively impact search results for your site. Here are a few potential issues and, where possible, how you can help prevent them from occurring:
- If your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on our capability to render your pages. If you'd like to ensure that your pages can be rendered by Google, make sure your servers can handle that volume of crawl requests (a rough spot-check sketch follows this list).
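As a rough way to spot-check server capacity, you could fire a small burst of concurrent requests at a few embedded resources and confirm they all come back without errors or timeouts. The URLs, burst size, and concurrency level in this sketch are placeholders, and this is not how Googlebot actually schedules its crawling.

# Rough sketch: issue a small burst of concurrent requests for embedded
# resources and report any errors or timeouts. URLs, burst size and
# concurrency are placeholders chosen for illustration only.
import concurrent.futures
import urllib.request

resources = [
    "http://www.example.com/css/site.css",
    "http://www.example.com/js/app.js",
    "http://www.example.com/images/logo.png",
] * 10  # repeat the list to simulate a modest burst

def fetch(url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return url, resp.status
    except Exception as exc:  # includes HTTP errors and timeouts
        return url, "error: " + str(exc)

with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
    for url, result in pool.map(fetch, resources):
        print(result, url)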
To make things easier to debug, we're currently working on a tool to help webmasters better understand how Google renders their site. We look forward to making it available to you in Webmaster Tools in the coming days.
If you have any questions, please feel free to visit our help forum.
Posted by Erik Hendriks and Michael Xu, Software Engineers, and Kazushi Nagayama, Webmaster Trends Analyst
GamingOnLinux: OpenXcom is still one of my favourite open source engines ever.
Storagebod: So it seems IBM has finally decided to stop reselling NetApp filers, and focus on its own products instead. I'm also waiting for the inevitable announcement that it will stop selling the rebadged Engenio products, as there is a fairly large crossover there.…
Release Notes: ddr_hash was enhanced. A bug where sha512/sha384 could have overflowed a buffer was fixed. sha1 support has been added. Most importantly, there are now options to conveniently check and store checksums in xattrs and in md5sum/sha256sum/... style files. A ddr_null plugin was added. (An illustrative sketch of the checksum-file and xattr formats follows below.)
Release Tags: Stable; minor bugfixes; minor features
Tags: Recovery Tools
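For readers unfamiliar with the storage formats mentioned in the release notes, the Python sketch below illustrates the general idea of recording a checksum both in an md5sum/sha256sum-style file (one "digest, two spaces, file name" line per file) and in an extended attribute. It is an illustration only: the xattr name and file names are made up, and this is not how ddr_hash itself is implemented.

# Illustration only: compute a SHA-256 digest of a file, append it to a
# sha256sum-style checksum file, and store it in an extended attribute.
# The xattr name "user.sha256" and the file names are placeholders;
# os.setxattr is Linux-only.
import hashlib
import os

path = "disk.img"  # placeholder file name

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
digest = h.hexdigest()

# sha256sum-style line: digest, two spaces, file name
with open("CHECKSUMS.sha256", "a") as out:
    out.write(digest + "  " + path + "\n")

# store the same digest in an extended attribute (Linux only)
os.setxattr(path, "user.sha256", digest.encode("ascii"))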
More than a thousand point-of-sale, grocery management and accounting systems worldwide have been compromised by a new strain of malware, results of a March 2014 probe have revealed.…