Helping users fill out online forms

Google Webmaster Central Blog - Tue, 2015-03-17 09:26

A lot of websites rely on forms to complete important goals, such as finishing a transaction on a shopping site or registering on a news site. For many users, online forms mean repeatedly typing common information like their names, emails, phone numbers or addresses on different sites across the web. In addition to being tedious, this task is also error-prone, which can lead many users to abandon the flow entirely. In a world where users browse the internet using their mobile devices more than their laptops or desktops, having forms that are easy and quick to fill out is crucial!

Three years ago, we announced support for a new “autocomplete” attribute in Chrome, to make form-filling faster, easier and smarter. Now, Chrome fully supports the "autocomplete" attribute for form fields according to the current WHATWG HTML Standard. This allows webmasters and web developers to label input element fields with common data types, such as ‘name’ or ‘street-address’, without changing the user interface or the backend. Numerous webmasters have increased the rate of form completions on their sites by marking up their forms for auto-completion.

For example, marking up an email address field on a form to allow auto-completion would look like this (a fuller sample form follows below):

<input type="email" name="customerEmail" autocomplete="email"/>

Making websites friendly and easy to browse for users on mobile devices is very important. We hope to see many forms marked up with the “autocomplete” attribute in the future. For more information, check out the guidance on labeling and naming inputs in Web Fundamentals. And as usual, if you have any questions, please post in our Webmasters Help Forums.

Posted by Mathieu Perreault, Chrome Software Engineer, and Zineb Ait Bahajji, Webmaster Trends Analyst


An update on doorway pages

Google Webmaster Central Blog - Mon, 2015-03-16 12:42
Google’s Search Quality team is continually working on ways to minimize the impact of webspam on users. This includes doorway pages.

We have a long-standing view that doorway pages created solely for search engines can harm the quality of the user’s search experience.

For example, searchers might get a list of results that all go to the same site. So if a user clicks on one result, doesn't like it, and then tries the next result in the search results page and is taken to that same site that they didn't like, that's a really frustrating experience.
Over time, we've seen sites try to maximize their “search footprint” without adding clear, unique value. These doorway campaigns manifest themselves as pages on a site, as a number of domains, or a combination thereof. To improve the quality of search results for our users, we’ll soon launch a ranking adjustment to better address these types of pages. Sites with large and well-established doorway campaigns might see a broad impact from this change.
To help webmasters better understand our guidelines, we've added clarifying examples and freshened our definition of doorway pages in our Quality Guidelines.
Here are questions to ask of pages that could be seen as doorway pages:
  • Is the purpose to optimize for search engines and funnel visitors into the actual usable or relevant portion of your site, or are the pages an integral part of your site’s user experience?
  • Are the pages intended to rank on generic terms, while the content presented on the page is very specific?
  • Do the pages duplicate useful aggregations of items (locations, products, etc.) that already exist on the site for the purpose of capturing more search traffic?
  • Are these pages made solely for drawing affiliate traffic and sending users along, without creating unique value in content or functionality?
  • Do these pages exist as an “island”? Are they difficult or impossible to navigate to from other parts of your site? Are links to such pages from other pages within the site or network of sites created just for search engines?
If you have questions or feedback about doorway pages, please visit our webmaster help forum.
Posted by Brian White, Google Webspam Team

Deprecation of the old Webmaster Tools API

Google Webmaster Central Blog - Thu, 2015-03-12 17:31

Last fall we announced the new Webmaster Tools API, which lets you automate a number of important Webmaster Tools tasks from code. With the pending shutdown of ClientLogin, we're going to turn down the old Webmaster Tools API on April 20, 2015.

If you're still using the old API, getting started with the new one is fairly easy. The new API covers everything from the old version except for messages and keywords. We have examples in Python and Java, as well as OACurl (for command-line fans & quick testing). Additionally, there's the Site Verification API for adding sites to your account programmatically. The Python search query data download will continue to be available for the moment, and will be replaced by an API in the upcoming quarters.

As always, should you have any questions, feel free to comment here, or post in our Webmaster Help Forum.


Posted by John Mueller, fan of command lines & APIs, Google Zuerich

Unblocking resources with Webmaster Tools

Google Webmaster Central Blog - Wed, 2015-03-11 07:48

Webmasters often use linked images, CSS, and JavaScript files in web pages to make them pretty and functional. If these resources are blocked from crawling, then Googlebot can't use them when it renders those pages for search. Google Webmaster Tools now includes a Blocked Resources Report to help you find and resolve these kinds of issues.

This report starts with the names of the hosts from which your site is using blocked resources such as JavaScript, CSS, and images. Clicking on a row gives you the list of blocked resources and then the pages that embed them, guiding you through the steps needed to diagnose and fix issues so that we're able to crawl and index the page's content.

An update to Fetch and Render shows how these blocked resources matter. When you request that a URL be fetched and rendered, it now shows screenshots rendered both as Googlebot and as a typical user. This makes it easier to see which blocked resources significantly change how Googlebot renders your pages.

Webmaster Tools attempts to show you only the hosts that you might have influence over, so at the moment, we won't show hosts that are used by many different sites (such as popular analytics services). Because it can be time-consuming (usually not for technical reasons!) to update all the robots.txt files involved, we recommend starting with the resources that make the most important visual difference when blocked. Our Help Center article has more information on the steps involved.
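
For example, if your robots.txt blocks an assets directory wholesale, adding more specific Allow rules for the CSS and JavaScript paths lets Googlebot fetch what it needs for rendering. A sketch, with illustrative paths:

User-agent: Googlebot
Allow: /assets/css/
Allow: /assets/js/
Disallow: /assets/

Google's robots.txt processing gives the most specific (longest) matching rule precedence, so the Allow lines above override the broader Disallow.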

We hope this new feature makes it easier for you to spot and then unblock resources used by your website! Should you have any questions, feel free to drop by our webmaster help forums.

Posted by John Mueller, Webmaster Trends Analyst, Google Switzerland

Easier website development with Web Components and JSON-LD

Google Webmaster Central Blog - Mon, 2015-03-09 06:57

JSON-LD is a JSON-based data format that can be used to implement structured data describing content on your site to Google and other search engines. For example, if you have a list of events, cafes, people or more, you can include this data in your pages in a structured way using the schema.org vocabulary embedded in webpages as a JSON-LD snippet. The structured data helps Google understand your pages better and highlight your content in search features, such as events in the Knowledge Graph and rich snippets.

Web Components are a nascent set of technologies to define custom, reusable user interface widgets and their behavior. Any web developer can build a Web Component. You start by defining a template for a distinct part of the user interface, which you import into the pages on which you want to use the Web Component. A Custom Element is used to define the behavior of the Web Component. Because you’re bundling the display and logic for part of the user interface into the Web Component, you can share and reuse the bundle on other pages and with other developers, thus simplifying web development.

JSON-LD and Web Components work really well together. The Custom Element functions as the presentation layer and the JSON-LD functions as the data layer that the custom element and search engines consume. This means you can build custom elements for any schema.org type, such as schema.org/Event and schema.org/LocalBusiness.

Your architecture would then look like this. Your structured data is stored in your database, for example, the store locations in your chain. This data is embedded into your webpage as a JSON-LD snippet, which means it’s available to be consumed by the Custom Element to display to a human visitor and for Googlebot to retrieve for Google indexing.
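
As a sketch, the embedded data layer for one store location might look like this (all values illustrative):

<script type="application/ld+json">
{
  "@context" : "http://schema.org",
  "@type" : "LocalBusiness",
  "name" : "Example Store - Downtown",
  "telephone" : "+1-555-0100",
  "address" : {
    "@type" : "PostalAddress",
    "streetAddress" : "123 Main St",
    "addressLocality" : "Anytown",
    "addressRegion" : "CA"
  }
}
</script>

A custom element on the same page can parse this script tag to render the store for visitors, while Googlebot consumes the same snippet for indexing.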

To learn more and get started with your own custom elements, please see the Web Components and JSON-LD documentation on our developer sites.

Posted by Ewa Gasperowicz, Developer Programs Engineer, Mano Marks, Developer Advocate, Pierre Far, Webmaster Trends Analyst

Safe Browsing and Google Analytics: Keeping More Users Safe, Together

Google Webmaster Central Blog - Tue, 2015-03-03 09:20

The following was originally posted on the Google Online Security Blog.

If you run a web site, you may already be familiar with Google Webmaster Tools and how it lets you know if Safe Browsing finds something problematic on your site. For example, we’ll notify you if your site is delivering malware, which is usually a sign that it’s been hacked. We’re extending our Safe Browsing protections to automatically display notifications to all Google Analytics users via familiar Google Analytics Notifications.

Google Safe Browsing has been protecting people across the Internet for over eight years and we're always looking for ways to extend that protection even further. Notifications like these help webmasters like you act quickly to respond to any issues. Fast response helps keep your site—and your visitors—safe.


Posted by: Stephan Somogyi, Product Manager, Security and Privacy

Finding more mobile-friendly search results

Google Webmaster Central Blog - Thu, 2015-02-26 13:23

Webmaster level: all

When it comes to search on mobile devices, users should get the most relevant and timely results, no matter if the information lives on mobile-friendly web pages or apps. As more people use mobile devices to access the internet, our algorithms have to adapt to these usage patterns. In the past, we’ve made updates to ensure a site is configured properly and viewable on modern devices. We’ve made it easier for users to find mobile-friendly web pages and we’ve introduced App Indexing to surface useful content from apps. Today, we’re announcing two important changes to help users discover more mobile-friendly content:

1. More mobile-friendly websites in search results

Starting April 21, we will be expanding our use of mobile-friendliness as a ranking signal. This change will affect mobile searches in all languages worldwide and will have a significant impact on our search results. Consequently, users will find it easier to get relevant, high-quality search results that are optimized for their devices.

To get help with making a mobile-friendly site, check out our guide to mobile-friendly sites. If you’re a webmaster, you can get ready for this change by using the following tools to see how Googlebot views your pages:

  • If you want to test a few pages, you can use the Mobile-Friendly Test.
  • If you have a site, you can use your Webmaster Tools account to get a full list of mobile usability issues across your site using the Mobile Usability Report.
2. More relevant app content in search results

Starting today, we will begin to use information from indexed apps as a factor in ranking for signed-in users who have the app installed. As a result, we may now surface content from indexed apps more prominently in search. To find out how to implement App Indexing, which allows us to surface this information in search results, have a look at our step-by-step guide on the developer site.
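
As a sketch, App Indexing pairs each web page with its app counterpart; one documented pattern is a rel=alternate link element carrying an android-app:// URI (package name and path here are illustrative):

<link rel="alternate" href="android-app://com.example.android/http/example.com/widgets" />

The URI names the app package, the scheme, and the host path of the matching content, so the in-app page can be surfaced to signed-in users who have the app installed.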

If you have questions about either mobile-friendly websites or app indexing, we’re always happy to chat in our Webmaster Help Forum.


Posted by Takaki Makino, Chaesang Jung, and Doantam Phan

Case Studies: Fixing Hacked Sites

Google Webmaster Central Blog - Wed, 2015-02-18 15:40

Webmaster Level: All

Every day, thousands of websites get hacked. Hacked sites can harm users by serving malicious software, collecting personal information, or redirecting them to sites they didn't intend to visit. Webmasters want to fix hacked sites quickly, but unfortunately recovering from a hack can be a complicated process.

We're trying to make the process of recovering from a hack easier for webmasters with features like Security Issues, Help for Hacked Sites, and a section of our forum just for hacked sites. Recently we talked to two webmasters with hacked sites to learn more about how they were able to fix their sites. We're sharing their stories with the hope that they might provide ideas to other webmasters who have been victims of hacking. We're also using these stories and other feedback for improving our documentation for hacked sites to make the process easier for everyone going forward.

Case Study #1: Restaurant website with multiple hack-injected scripts

A restaurant website using WordPress received a message from Google in their Webmaster Tools account, alerting them that their site had been altered by hackers. To protect Google users, the website was labelled as hacked in Google's search results. The webmaster of the site, Sam, looked at the source code and noticed many unfamiliar links on the site with pharmaceutical terms such as "viagra" and "cialis." She also noticed many pages where content such as "buy valtrex in florida" had been added to the meta description tags in the HTML. Many pages also contained hidden div tags linking out to other sites. None of these links were added by Sam.

Sam removed all of the hacked content she found and filed a reconsideration request. The request was rejected, but in the message she received from Google, she was advised to check for any unfamiliar scripts in any PHP files (or any other server files), as well as for changes to the .htaccess file. These files are likely to have scripts added by the hackers that modify the site. These scripts typically show the hacked content only to search engines, while hiding it from a normal user. Sam checked all of the .php files and compared them to the clean copies she had in her backup. She found new content added to her footer.php, index.php, and functions.php. When she replaced those files with the clean backups, she could no longer find any hacked content on her site. When she filed another reconsideration request, she got a response from Google notifying her that her site was free of hacked content!

Even though Sam had cleaned up the hacked content on her site, she knew that she would need to continue to secure her site against future attacks. She followed the steps below to keep her site safe in the future:

  • Keep the CMS (content management system, like WordPress, Joomla, or Drupal) updated to the most current version. Make sure plugins are up to date as well.
  • Make sure the account used to access the administrative features of the CMS uses a strong and unique password.
  • If the CMS supports it, enable 2-step verification for login. (This might also be called two-factor authentication or two-step authentication.) This is recommended for the account used for password recovery as well. Most email providers, like Google, Microsoft, and Yahoo, support this!
  • Make sure the plugins and themes you install come from a reputable source - pirated plugins or themes can often contain code that makes it even easier for hackers to get in!
Case Study #2: Professional website with many hard-to-find hacked pages

A small business owner named Maria who also manages her own website received a message in her Webmaster Tools that her site was hacked. The message provided an example of a page added by hackers: http://example.com/where-to-buy-cialis-over-the-counter/. She talked to her hosting provider who looked at the source code on the homepage but could not find any pharmaceutical keywords. When the hosting provider visited http://example.com/where-to-buy-cialis-over-the-counter/, it returned an error page. Maria also bought a malware scanning service but the service was not able to find any malicious content on her site.

Maria then went to Webmaster Tools and used the Fetch as Google tool on the example URL Google had provided (http://example.com/where-to-buy-cialis-over-the-counter/) which returned no content. Confused, she filed a reconsideration request and received a rejection message which advised her to do two things:

  1. Verify the non-www version of her site, as hackers often try to hide content in parts of the site that may be overlooked by the webmaster.

    While it may seem like http://example.com and http://www.example.com are the same site, Google actually treats these as different sites: http://example.com is referred to as the "root domain," while http://www.example.com is a subdomain. Maria had verified http://www.example.com but not http://example.com, which mattered because the pages added by the hackers were non-www pages like http://example.com/where-to-buy-cialis-over-the-counter/. Once she verified http://example.com, she was able to see the hacked content on the provided URL with the Fetch as Google tool in Webmaster Tools.

  2. Check her .htaccess file for new rules.

    Maria talked to her hosting provider who showed her how to access her .htaccess file. She noticed right away that her .htaccess file had some strange content that she had not added:

    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_USER_AGENT} (google|yahoo|msn|aol|bing) [OR]
    RewriteCond %{HTTP_REFERER} (google|yahoo|msn|aol|bing)
    RewriteRule ^([^/]*)/$ /main.php?p=$1 [L]
    </IfModule>

    The mod_rewrite rule you see above was inserted by the hacker and redirects anyone coming from certain search engines, as well as search engine crawlers, to main.php, which generates all of the hacked content. Rules like these can also redirect users accessing the site on a mobile device. On the same day, she also saw that a recent malware scan had found suspicious content in the main.php file. On top of that, she noticed an unknown user in the FTP users area of her website development software.

She removed the main.php file and the .htaccess file, deleted the unknown user from her FTP users area, and her site was no longer hacked!

Steps to prevent getting hacked in the future
  • Avoid using FTP when transferring files to your servers. FTP does not encrypt any traffic, including passwords. Instead, use SFTP, which will encrypt everything, including your password, as a protection against eavesdroppers examining network traffic.
  • Check the permissions on sensitive files like .htaccess. Your hosting provider may be able to assist you if you need help. The .htaccess file can be used to improve and protect your site, but it can also be used for malicious redirects if attackers gain access to it.
  • Be vigilant and look for new and unfamiliar users in your administrative panel and any other place where there may be users that can modify your site.

We hope your site never gets hacked, but if it does, we have many resources for hacked webmasters on our Help for Hacked Sites page. If you need more help or would like to share your own tips, you can post in our Webmaster Help Forum. If you do post to the forum or submit a reconsideration request for your site, please include #NoHacked.

Posted by Julian Prentice and Yuan Niu, Search Quality Team

Crawling and indexing of locale-adaptive pages

Google Webmaster Central Blog - Wed, 2015-01-28 06:02

Webmaster level: advanced

Locale-adaptive pages change their content to reflect the user's language or perceived geographic location. Since, by default, Googlebot requests pages without setting an Accept-Language HTTP request header and uses IP addresses that appear to be located in the USA, not all content variants of locale-adaptive pages may be indexed completely.

Today we’re introducing new locale-aware crawl configurations for Googlebot for pages that we detect may adapt the content they serve based on the request's language and perceived location. These are:

  • Geo-distributed crawling, where Googlebot would start to use IP addresses that appear to be coming from outside the USA, in addition to the US-based IP addresses it currently uses.
  • Language-dependent crawling, where Googlebot would start to crawl with an Accept-Language HTTP header in the request.

As these new crawling configurations are enabled automatically for pages we detect to be locale-adaptive, you may notice changes in how we crawl and show your site in Google search results without you altering your CMS or server settings.

Note that these new configurations do not alter our recommendation to use separate URLs with rel=alternate hreflang annotations for each locale. We continue to support and recommend using separate URLs, as they are still the best way for users to interact with and share your content, and to maximize indexing and better ranking of all variants of your content.
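
For example, an English page and its German variant would reference the full set of alternates, including themselves, with link elements like these (URLs illustrative):

<link rel="alternate" hreflang="en" href="http://example.com/en/page.html" />
<link rel="alternate" hreflang="de" href="http://example.com/de/page.html" />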

As always, if you have any questions or feedback, please tell us in the internationalization Webmaster Help Forum.

Posted by Qin Yin, Software Engineer Search Infrastructure, and Pierre Far, Webmaster Trends Analyst

Upcoming Events In The Knowledge Graph

Google Webmaster Central Blog - Thu, 2015-01-15 15:06
Webmaster level: all

Last year, we launched a new way for musical artists to list their upcoming events on Google: schema.org markup on their official websites. Now we’re expanding this program in four ways:

1. Official Ticket Links

For artists: if you mark up ticketing links along with the events on your official website, we’ll show an expanded answer card for your events in Google search, including the on-sale date, availability, and a direct link to your preferred ticketing site.
As before, you may write the event markup directly into your site’s HTML, or simply install an event widget that builds in the markup for you automatically—like Bandsintown, BandPage, GigPress, ReverbNation or Songkick.
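
As a rough sketch, event markup with an official ticket link might look like this in JSON-LD (all values are illustrative; see the developer documentation for the exact fields this feature reads):

<script type="application/ld+json">
{
  "@context" : "http://schema.org",
  "@type" : "MusicEvent",
  "name" : "Example Band Live",
  "startDate" : "2015-06-01T20:00",
  "location" : {
    "@type" : "Place",
    "name" : "The Example Theater",
    "address" : "456 Stage Rd, Anytown, CA"
  },
  "offers" : {
    "@type" : "Offer",
    "url" : "http://tickets.example.com/example-band-live",
    "availability" : "http://schema.org/InStock",
    "validFrom" : "2015-05-01T10:00"
  }
}
</script>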

2. Delegated Event Listings

What if you can’t add markup or an event widget to your official website—for example, if your website doesn’t list your events at all? Now you can use delegation markup to tell us to source your events from a page of your choice on another website. Just add the following markup to your home page, making sure to customize the three values: your name, your official website URL, and the event listing page:
<script type="application/ld+json">
{
  "@context" : "http://schema.org",
  "@type" : "MusicGroup",
  "name" : "Your Band or Performer Name",
  "url" : "http://your-official-website.com",
  "event" : "http://other-event-site.com/your-event-listing-page/"
}
</script>

The marked-up events found on the other event site's page will then be eligible for Google events features. Examples of sites you can point to in the “event” field include bandpage.com, bandsintown.com, songkick.com, and ticketmaster.com.

3. Comedian Events

Hey funny people! We want your performances to show up on Google, too. Just add ComedyEvent markup to your official website. Or, if another site like laughstub.com has your complete event listings, use delegation markup on your home page to point us their way.
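
A minimal ComedyEvent snippet might look like this (values illustrative):

<script type="application/ld+json">
{
  "@context" : "http://schema.org",
  "@type" : "ComedyEvent",
  "name" : "An Evening of Standup",
  "startDate" : "2015-04-18T21:00",
  "url" : "http://your-official-website.com/shows/april-18",
  "location" : {
    "@type" : "Place",
    "name" : "The Example Comedy Club",
    "address" : "789 Laugh Ln, Anytown, CA"
  }
}
</script>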

4. Venue Events

Last but definitely not least: we’re starting to show venue event listings in Google Search. Concert venues, theaters, libraries, fairgrounds, and so on: make your upcoming events eligible for display across Google by adding Event markup to your official website.
As with artist events, you have a choice of writing the event markup directly into your site’s HTML, or using a widget or plugin that builds in the markup for you. Also, if all your events are ticketed by a primary ticketer whose website provides markup, you don’t have to do anything! Google will read the ticketer’s markup and apply it toward your venue’s event listings.

For example, venues ticketed by Ticketmaster, including its international sites and TicketWeb, will automatically be covered. The same goes for venues that list events with Ticketfly, AXS, LaughStub, Wantickets, Holdmyticket, ShowClix, Stranger Tickets, Ticket Alternative, Digitick, See Tickets, Tix, Fnac Spectacles, Ticketland.ru, iTickets, MIDWESTIX, Ticketleap, or Instantseats. All of these have already implemented ticketer events markup.

Please see our Developer Site for full documentation of these features, including a video tutorial on how to write and test event markup. Then add the markup, help new fans discover your events, and play to a packed house!

Posted by Justin Boyan, Product Manager, Google Search

New Structured Data Testing Tool, documentation, and more

Google Webmaster Central Blog - Thu, 2015-01-15 15:03
Webmaster level: all

Structured data markup helps your content get discovered in search results and across Google properties. We’re excited to share several updates to help you author and publish markup on your website:

Structured Data Testing Tool

The new Structured Data Testing Tool better reflects how Google interprets a web page’s structured data markup. It provides the following features:
  • Validation for all Google features powered by structured data
  • Support for markup in the JSON-LD syntax, including in dynamic HTML pages
  • Clean display of the structured data items on your page
  • Syntax highlighting of markup problems right in your HTML source code

New documentation and simpler policy

We've clarified our documentation for the vocabulary supported in structured data, based on webmasters' feedback. The new documentation explains the markup you need to add to enable different search features for your content, along with code examples in the supported syntaxes. We'll be retiring the old documentation soon.

We've also simplified and clarified our policies on using structured data. If you believe that another site is abusing Google's rich snippets quality guidelines, please let us know using the rich snippets spam report form.

Expanded support for JSON-LD

We've extended our support for schema.org vocabulary in JSON-LD syntax to new use cases: company logos and contacts, social profile links, events in the Knowledge Graph, the sitelinks search box, and event rich snippets. We're working on expanding support to additional markup-powered features in the future.
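
For instance, company logo markup is a short JSON-LD snippet on your home page (values illustrative):

<script type="application/ld+json">
{
  "@context" : "http://schema.org",
  "@type" : "Organization",
  "url" : "http://www.example.com",
  "logo" : "http://www.example.com/images/logo.png"
}
</script>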

As always, we welcome your feedback and questions; please post in our Webmaster Help forums.

Posted by Pierre Far, Webmaster Trends Analyst, Tatsiana Sakhar, Search Quality Analyst, Zach Clifford, Software Engineer

Introducing the Google News Publisher Center

Google Webmaster Central Blog - Tue, 2014-12-16 08:57
(Cross-posted on the Google News Blog)

Webmaster level: All

UPDATE: Great News -- The Publisher Center is now available in all countries where Google News has an edition.

If you're a news publisher, your website has probably evolved and changed over time -- just like your stories. But in the past, when you made changes to the structure of your site, we might not have discovered your new content. That meant a lost opportunity for your readers, and for you. Unless you regularly checked Webmaster Tools, you might not even have realized that your new content wasn't showing up in Google News. To prevent this from happening, we now let you make changes to our record of your news site using the just-launched Google News Publisher Center.

With the Publisher Center, your potential readers can be more informed about the articles they’re clicking on and you benefit from better discovery and classification of your news content. After verifying ownership of your site using Google Webmaster Tools, you can use the Publisher Center to directly make the following changes:

  • Update your news site details, including changing your site name and labeling your publication with any relevant source labels (e.g., “Blog”, “Satire” or “Opinion”)
  • Update your section URLs when you change your site structure (e.g., when you add a new section such as http://example.com/2014commonwealthgames or http://example.com/elections2014)
  • Label your sections with a specific topic (e.g., “Technology” or “Politics”)

Whenever you make changes to your site, we’d recommend also checking our record of it in the Publisher Center and updating it if necessary.

Try it out, or learn more about how to get started.

At the moment the tool is only available to publishers in the U.S. but we plan to introduce it in other countries soon and add more features.  In the meantime, we’d love to hear from you about what works well and what doesn’t. Ultimately, our goal is to make this a platform where news publishers and Google News can work together to provide readers with the best, most diverse news on the web.

Posted by Eric Weigle, Software Engineer

Google Public DNS and Location-Sensitive DNS Responses

Google Webmaster Central Blog - Mon, 2014-12-15 11:00

Webmaster level: advanced

Recently the Google Public DNS team, in collaboration with Akamai, reached an important milestone: Google Public DNS now propagates client location information to Akamai nameservers. This effort significantly improves the accuracy of approximately 30% of the location-sensitive DNS responses returned by Google Public DNS. In other words, client requests for Akamai-hosted content can be routed to closer servers, with lower latency and greater data transfer throughput. Overall, Google Public DNS resolvers serve 400 billion responses per day, and more than 50% of them are location-sensitive.

DNS is often used by Content Distribution Networks (CDNs) such as Akamai to achieve location-based load balancing by constructing responses based on clients’ IP addresses. However, CDNs usually see the DNS resolvers’ IP address instead of the actual clients’ and are therefore forced to assume that the resolvers are close to the clients. Unfortunately, the assumption is not always true. Many resolvers, especially those open to the Internet at large, are not deployed at every single local network.

To solve this issue, a group of DNS and content providers, including Google, proposed an approach to allow resolvers to forward the client’s subnet to CDN nameservers in an extension field in the DNS request. The subnet is a portion of the client’s IP address, truncated to preserve privacy. The approach is officially named edns-client-subnet or ECS.

This solution requires that both resolvers and CDNs adopt the new DNS extension. Google Public DNS resolvers automatically probe to discover ECS-aware nameservers and have observed the footprint of ECS support from CDNs expanding steadily over the past years. By now, more than 4000 nameservers from approximately 300 content providers support ECS. The Google-Akamai collaboration marks a significant milestone in our ongoing efforts to ensure DNS contributes to keeping the Internet fast. We encourage more CDNs to join us by supporting the ECS option.
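
To observe ECS in action, recent versions of dig can attach a client subnet to a query (the nameserver and subnet below are illustrative; 198.51.100.0/24 is a documentation range):

dig www.example.com +subnet=198.51.100.0/24 @ns1.example-cdn.com

An ECS-aware nameserver may then tailor its answer to that subnet rather than to the resolver's address.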

For more information about Google Public DNS, please visit our website. For CDN operators, please also visit “A Faster Internet” for more technical details.


Posted by Yunhong Gu, Tech Lead, Google Public DNS

The four steps to appiness

Google Webmaster Central Blog - Tue, 2014-12-09 13:22

Webmaster Level: intermediate to advanced

App deep links are the new kid on the block in organic search, and they’re picking up speed faster than you can say “schema.org ViewAction”! For signed-in users, 15% of Google searches on Android now return deep links to apps through App Indexing. And over just the past quarter, we've seen the number of clicks on app deep links jump by 10x.

Since we opened up App Indexing back in June, we’ve gotten a lot of feedback from developers and seen many implementations go right, as well as others that were good learning experiences. We’d like to share with you four key steps to monitor app performance and drive user engagement:

1. Give your app developer access to Webmaster Tools

App indexing is a team effort between you (as a webmaster) and your app development team. We show information in Webmaster Tools that is key for your app developers to do their job well. Here’s what’s available right now:

  • Errors in indexed pages within apps
  • Weekly clicks and impressions from app deep links via Google search
  • Stats on your sitemap (if that’s how you implemented the app deep links)
...and we plan to add a lot more in the coming months!

We’ve noticed that very few developers have access to Webmaster Tools. So if you want your app development team to get all of the information they need to fix app-related issues, it’s essential for them to have access to Webmaster Tools.

Any verified site owner can add a new user. Pick restricted or full permissions, depending on the level of access you’d like to give.

2. Understand how your app is doing in search results

How are users engaging with your app from search results? We’ve introduced two new ways for you to track performance for your app deep links:

  • We now send a weekly clicks and impressions update to the Message center in your Webmaster Tools account.
  • You can now track how much traffic app deep links drive to your app using referrer information - specifically, the referrer extra in the ACTION_VIEW intent. We're working to integrate this information with Google Analytics for even easier access. Learn how to track referrer information on our Developer site.

3. Make sure key app resources can be crawled

Blocked resources are one of the top reasons for the “content mismatch” errors you see in Webmaster Tools’ Crawl Errors report. We need access to all the resources necessary to render your app page. This allows us to assess whether your associated web page has the same content as your app page.

To help you find and fix these issues, we now show you the specific resources we can’t access that are critical for rendering your app page. If you see a content mismatch error for your app, look out for the list of blocked resources in “Step 5” of the details dialog.

4. Watch out for Android App errors

To help you identify errors when indexing your app, we’ll send you messages for all app errors we detect, and will also display most of them in the “Android apps” tab of the Crawl errors report.

In addition to the currently available “Content mismatch” and “Intent URI not supported” error alerts, we’re introducing three new error types:

  • APK not found: we can’t find the package corresponding to the app.
  • No first-click free: the link to your app does not lead directly to the content, but requires login to access.
  • Back button violation: after following the link to your app, the back button did not return to search results.

In our experience, the majority of errors are caused by a general setting in your app (e.g. a blocked resource, or a region picker that pops up when the user tries to open the app from search). Taking care of that usually resolves the error for all affected URIs.

Good luck in the pursuit of appiness! As always, if you have questions, feel free to drop by our Webmaster help forum.

Posted by Mariya Moeva, Webmaster Trends Analyst
