Feed aggregator

Carry On YouView Regardless, BBC Trust tells the BBC

The Register - Tue, 2014-05-27 09:06
I'm flabbergasted. My flabber has never been so gasted

The BBC can carry on investing in internet TV outfit YouView, the BBC Trust has ruled. The oversight panel "endorsed" the broadcaster's continued involvement in the video on-demand organisation and has published a report supporting its decision.…

Categories: news

4 Free and Open Source Alternatives to Matlab

Linux Today - Tue, 2014-05-27 09:00

 electronicsforu: Matlab's easy-to-use interface, power, and flexibility have deservedly made it popular and useful software

Categories: linux, news, open source

Suicide bomber kills 19 in Baghdad Shi'ite mosque

Reuters: Technology - Tue, 2014-05-27 08:44
BAGHDAD (Reuters) - A suicide bomber blew himself up inside a Shi'ite mosque in central Baghdad on Tuesday, killing at least 19 people, security and medical sources said.
Categories: news

World shares hover near record high

Reuters: Technology - Tue, 2014-05-27 08:40
LONDON (Reuters) - World shares were just shy of a record and the euro was being squeezed on Tuesday, on expectations the European Central Bank will extend more than five years of easy monetary policy when it meets next week.
Categories: news

GSMA: There are more mobile connections than PEOPLE... but WHO'S HOGGING them all?

The Register - Tue, 2014-05-27 08:37
We need to track the users

By the end of the year there will be more mobile phone connections than people on the planet, although they mostly belong to the richer part of the planet – a whopping half the world’s population doesn’t have connectivity.…

Categories: news

Rendering pages with Fetch as Google

Google Webmaster Central Blog - Tue, 2014-05-27 08:28
Webmaster level: all

The Fetch as Google feature in Webmaster Tools provides webmasters with the results of Googlebot attempting to fetch their pages. The server headers and HTML shown are useful to diagnose technical problems and hacking side-effects, but sometimes make double-checking the response hard: Help! What do all of these codes mean? Is this really the same page as I see it in my browser? Where shall we have lunch? We can't help with that last one, but for the rest, we've recently expanded this tool to also show how Googlebot would be able to render the page.
Viewing the rendered page

In order to render the page, Googlebot will try to find all the external files involved, and fetch them as well. Those files frequently include images, CSS and JavaScript files, as well as other files that might be indirectly embedded through the CSS or JavaScript. These are then used to render a preview image that shows Googlebot's view of the page.

You can find the Fetch as Google feature in the Crawl section of Google Webmaster Tools. After submitting a URL with "Fetch and render," wait for it to be processed (this might take a moment for some pages). Once it's ready, just click on the response row to see the results.


Handling resources blocked by robots.txt

Googlebot follows the robots.txt directives for all files that it fetches. If you are disallowing crawling of some of these files (or if they are embedded from a third-party server that's disallowing Googlebot's crawling of them), we won't be able to show them to you in the rendered view. Similarly, if the server fails to respond or returns errors, then we won't be able to use those either (you can find similar issues in the Crawl Errors section of Webmaster Tools). If we run across either of these issues, we'll show them below the preview image.

We recommend making sure Googlebot can access any embedded resource that meaningfully contributes to your site's visible content, or to its layout. That will make Fetch as Google easier for you to use, and will make it possible for Googlebot to find and index that content as well. Some types of content – such as social media buttons, fonts or website-analytics scripts – tend not to meaningfully contribute to the visible content or layout, and can be left disallowed from crawling. For more information, please see our previous blog post on how Google is working to understand the web better.
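For example, a robots.txt along these lines (the /assets/ paths are hypothetical, not taken from this post) keeps the resources that shape the page crawlable while leaving a purely cosmetic analytics script blocked:

    User-agent: Googlebot
    # Hypothetical layout: allow the CSS and JavaScript that affect
    # visible content and layout to be fetched...
    Allow: /assets/css/
    Allow: /assets/js/
    # ...while a website-analytics script can stay disallowed, as the
    # post suggests. For Googlebot, the most specific rule wins here.
    Disallow: /assets/js/analytics.js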

We hope this update makes it easier for you to diagnose these kinds of issues, and to discover content that's accidentally blocked from crawling. If you have any comments or questions, let us know here or drop by the webmaster help forum.

Posted by Shimi Salant, Webmaster Tools team
Categories: sysadmin

The Flaw Lurking In Every Deep Neural Net

Slashdot - Tue, 2014-05-27 08:27
mikejuk (1801200) writes "A recent paper, 'Intriguing properties of neural networks,' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, outlines two pieces of news about the way neural networks behave that run counter to what we believed — and one of them is frankly astonishing. Every deep neural network has 'blind spots' in the sense that there are inputs that are very close to correctly classified examples yet are misclassified. To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.' To be clear, the adversarial examples looked to a human like the originals, but the network misclassified them. You can have two photos that look not only like a cat but like the same cat, indeed the same photo, to a human, yet the machine gets one right and the other wrong. What is even more shocking is that the adversarial examples seem to have some sort of universality: a large fraction were misclassified by different network architectures trained on the same data, and by networks trained on a different data set. You might be thinking, 'so what if a cat photo that is clearly a photo of a cat is recognized as a dog?' Now change the situation just a little: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road? There is also the philosophical question raised by these blind spots. If a deep neural network is biologically inspired, we can ask whether the same result applies to biological networks. Put more bluntly, does the human brain have similar built-in errors? If it doesn't, how is it so different from the neural networks that are trying to mimic it?"
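The effect is easy to reproduce numerically. Below is a toy sketch, not the paper's method (the authors used box-constrained L-BFGS on deep networks); it uses a single linear unit and a gradient-sign step, but it shows how moving each "pixel" by only 1% of its range can flip a confident decision:

    # Toy demonstration of adversarial "blind spots" (assumed setup, not
    # from the paper): a linear classifier over d pixel values in [0, 1].
    import numpy as np

    rng = np.random.default_rng(0)
    d = 1000
    w = rng.normal(size=d)
    b = -0.5 * w.sum()          # puts a mid-grey image on the boundary

    def logit(x):
        return w @ x + b        # > 0 -> "cat", < 0 -> "dog"

    x = 0.5 + 0.003 * w         # an image classified confidently as "cat"
    print("clean logit:       %+.2f" % logit(x))      # about +3

    eps = 0.01                  # each pixel moves by at most 1% of its range
    x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)
    print("adversarial logit: %+.2f" % logit(x_adv))  # about -5: now "dog"

    # The logit shifts by roughly eps * sum(|w|), which grows with the
    # number of pixels, so an imperceptible per-pixel change is enough.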

Read more of this story at Slashdot.
Categories: news

Ditching renewables will punch Aussies in the wallet – Bloomberg

The Register - Tue, 2014-05-27 08:17
Spiking billions of future investment not a great idea, we're told

The Australian government's plan to scrap its Renewable Energy Target (RET), pitched as a way to cut power bills down under, will drive up electricity prices. That's according to an analysis by Bloomberg New Energy Finance (NEF).…

Categories: news

Vietnam, China trade barbs after Vietnamese fishing boat sinks

Reuters: Technology - Tue, 2014-05-27 08:12
HANOI/BEIJING (Reuters) - Vietnam and China traded accusations on Tuesday over the sinking of a Vietnamese fishing boat not far from where China has parked an oil rig in the disputed South China Sea, as tensions fester between the two countries over the giant drilling platform.
Categories: news

Red Hat: 2014:0561-01: curl: Moderate Advisory

LinuxSecurity.com - Tue, 2014-05-27 08:12
LinuxSecurity.com: Updated curl packages that fix two security issues and several bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Moderate [More...]
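On Red Hat Enterprise Linux 6 such an update is normally applied with yum; a minimal sketch (package names as commonly shipped, not quoted from the advisory):

    # Pull in the patched packages, then confirm the installed versions.
    yum update curl libcurl
    rpm -q curl libcurl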
Categories: linux, news, security

U.S. durable goods orders rise on defense, but business spending plans weak

Reuters: Technology - Tue, 2014-05-27 08:10
WASHINGTON (Reuters) - Orders for long-lasting U.S. manufactured goods unexpectedly rose in April, but a drop in a measure of business capital spending plans could temper expectations for a sharp rebound in economic growth this quarter.
Categories: news

Understanding web pages better

Google Webmaster Central Blog - Tue, 2014-05-27 08:00

In 1998, when our servers were running in Susan Wojcicki’s garage, we didn't really have to worry about JavaScript or CSS. They weren't used much; when JavaScript was used at all, it was to make page elements... blink! A lot has changed since then. The web is full of rich, dynamic, amazing websites that make heavy use of JavaScript. Today, we'll talk about our capability to render richer websites — meaning we see your content more like modern web browsers do, including the external resources, executing JavaScript and applying CSS.

Traditionally, we were only looking at the raw textual content that we’d get in the HTTP response body and didn't really interpret what a typical browser running JavaScript would see. When pages that have valuable content rendered by JavaScript started showing up, we weren’t able to let searchers know about it, which is a sad outcome for both searchers and webmasters.

In order to solve this problem, we decided to try to understand pages by executing JavaScript. It’s hard to do that at the scale of the current web, but we decided that it’s worth it. We have been gradually improving how we do this for some time. In the past few months, our indexing system has been rendering a substantial number of web pages more like an average user’s browser with JavaScript turned on.

Sometimes things don't go perfectly during rendering, which may negatively impact search results for your site. Here are a few potential issues, and, where possible, how you can help prevent them from occurring:
  • If resources like JavaScript or CSS in separate files are blocked (say, with robots.txt) so that Googlebot can’t retrieve them, our indexing systems won’t be able to see your site like an average user. We recommend allowing Googlebot to retrieve JavaScript and CSS so that your content can be indexed better. This is especially important for mobile websites, where external resources like CSS and JavaScript help our algorithms understand that the pages are optimized for mobile.
  • If your web server is unable to handle the volume of crawl requests for resources, it may have a negative impact on our capability to render your pages. If you’d like to ensure that your pages can be rendered by Google, make sure your servers are able to handle crawl requests for resources.
  • It's always a good idea to have your site degrade gracefully (see the sketch after this list). This will help users enjoy your content even if their browser doesn't have a compatible JavaScript implementation. It will also help visitors who have JavaScript turned off, as well as search engines that can't execute JavaScript yet.
  • Sometimes the JavaScript may be too complex or arcane for us to execute, in which case we can’t render the page fully and accurately.
  • Some JavaScript removes content from the page rather than adding it, which prevents us from indexing that content.
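As a concrete (and entirely hypothetical) illustration of the graceful-degradation point above: serve the content in the HTML itself and use JavaScript only as an enhancement, so visitors and crawlers without JavaScript still get everything.

    <!-- Sketch only: the article text lives in the markup, so it
         survives even if the script below never runs. -->
    <article id="story">
      <h1>Headline</h1>
      <p>The full article text is served in the HTML itself.</p>
    </article>
    <script>
      // Enhancement, not a requirement; note that removing content
      // here (rather than adding it) would keep it out of the index,
      // per the last bullet above.
      var story = document.getElementById('story');
      if (story) { story.className += ' enhanced'; }
    </script>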

To make things easier to debug, we're currently working on a tool for helping webmasters better understand how Google renders their site. We look forward to making it available to you in the coming days in Webmaster Tools.

If you have any questions, please feel free to visit our help forum.

Posted by Erik Hendriks and Michael Xu, Software Engineers, and Kazushi Nagayama, Webmaster Trends Analyst
Categories: sysadmin

Thai army gets down to work on economy, stifles dissent

Reuters: Technology - Tue, 2014-05-27 08:00
BANGKOK (Reuters) - Thailand's military rulers settled down to work at their Bangkok headquarters on Tuesday, firmly in charge with royal endorsement but facing small protests that the security forces have handled with restraint.
Categories: news

Red Hat: 2014:0560-01: libvirt: Moderate Advisory

LinuxSecurity.com - Tue, 2014-05-27 08:00
LinuxSecurity.com: Updated libvirt packages that fix one security issue and three bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having Moderate [More...]
Categories: linux, news, security

OpenXcom, The Open Source Engine For The Original X-COM: Stable Release To Come Soon

Linux Today - Tue, 2014-05-27 08:00

 GamingOnLinux: OpenXcom is still one of my favourite open source engines ever.

Categories: linux, news, open source

Red Hat: 2014:0557-01: kernel-rt: Important Advisory

LinuxSecurity.com - Tue, 2014-05-27 07:54
LinuxSecurity.com: Updated kernel-rt packages that fix multiple security issues are now available for Red Hat Enterprise MRG 2.5. The Red Hat Security Response Team has rated this update as having [More...]
Categories: linux, news, security

Australian iPhone and iPad Users Waylaid By Ransomware

Slashdot - Tue, 2014-05-27 07:46
DavidGilbert99 (2607235) writes "Multiple iPhone/iPad/Mac users in Australia are reporting that their devices have been remotely locked, with a ransom demanded to unlock them. However, unlike PC ransomware, the attack vector here seems to be Apple's iCloud service, with the attacker having obtained a database of username/password credentials associated with the accounts. It is unclear whether the database was one of Apple's, or whether the hacker is simply exploiting the fact that people reuse the same password for multiple accounts, using data stolen from another source. Apple has yet to respond, but there has already been one report of the issue affecting a user in the UK."

Read more of this story at Slashdot.
Categories: news

PEAK NAS? Peak NAS. I reckon we've reached it

The Register - Tue, 2014-05-27 07:40
Our man on the data centre floor thinks the filer arena will plateau

Storagebod So it seems IBM has finally decided to stop reselling NetApp filers, and focus on its own products instead. I’m also waiting for the inevitable announcement that it will stop selling the rebadged Engenio products, as there is a fairly large crossover there.…

Categories: news

dd_rescue 1.45

Freshmeat - Tue, 2014-05-27 07:30
dd_rescue copies data from one file or block device to another. It is intended for error recovery, so by default, it doesn't abort on errors and doesn't truncate the output file. It uses large block sizes to quicken the copying, but falls back to small blocks upon encountering errors. It produces reports that allow you to keep track of bad blocks. dd_rescue features a number of optimizations, such as sparse block detection, preallocation, and Linux zerocopy (splice). It supports data protection by (multi-pass) overwriting of files or partitions with good and fast random numbers.
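A typical recovery invocation is the basic two-argument form (the device and output paths below are hypothetical); unlike plain dd, it carries on past read errors:

    # Image a failing partition; unreadable blocks are noted in the
    # report rather than aborting the copy.
    dd_rescue /dev/sdb1 /mnt/safe/sdb1.img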

Release Notes: ddr_hash was enhanced. A bug where sha512/sha384 could have overflowed a buffer was fixed. sha1 support has been added. Most importantly, there are now options to conveniently check and store checksums in xattrs and in md5sum/sha256sum/... style files. A ddr_null plugin was added.

Release Tags: Stable; minor bugfixes; minor features

Tags: Recovery Tools

Licenses: GPL

Categories: open source

Swiping your card at local greengrocers? Miscreants will swipe YOU in a minute

The Register - Tue, 2014-05-27 07:14
Keylogging botnet Nemanja is coming to a small biz near you

More than a thousand point-of-sale, grocery management and accounting systems worldwide have been compromised by a new strain of malware, results of a March 2014 probe have revealed.…

Categories: news