Latest SEO News Updates
Wednesday, 13 July 2016
Saturday, 9 April 2016
Security is a top priority for Google. We invest a lot in making sure that our services use industry-leading security, like strong HTTPS encryption by default. That means that people using Search, Gmail and Google Drive, for example, automatically have a secure connection to Google.
Beyond our own stuff, we’re also working to make the Internet safer more broadly. A big part of that is making sure that websites people access from Google are secure. For instance, we have created resources to help webmasters prevent and fix security breaches on their sites.
We’ve also seen more and more webmasters adopting HTTPS (also known as HTTP over TLS, or Transport Layer Security) on their websites, which is encouraging.
For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We've seen positive results, so we're starting to use HTTPS as a ranking signal. For now it's only a very lightweight signal — affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content — while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.
SEO advantages of switching to HTTPS
HTTPS clearly offers security, so it is the obvious choice to put you in Google’s good graces. There are also some additional SEO benefits for you to consider.
A ranking boost
The obvious one. As stated, Google has confirmed a slight ranking boost for HTTPS sites. Like most ranking signals, it is very hard to isolate on its own, but it is still something to keep in mind. On the plus side, the value of switching to HTTPS is very likely to increase over time.
Referral data
When traffic passes to an HTTPS site, the secure referral information is preserved. When traffic passes from an HTTPS site to an HTTP site, by contrast, the referral information is stripped away and the traffic looks as though it is “direct.”
Security and privacy
HTTPS adds security for your website and your SEO goals in several ways:
It verifies that the server is the one it claims to be, so visitors are talking to the right website.
It prevents tampering by third parties.
It makes your site more secure for visitors.
It encrypts all communication, including URLs, which protects things like browsing history and credit card numbers.
What type of SSL certificate works best?
Companies offer a confusing array of SSL certificates. The two primary ones to pay attention to are:
Standard Validation SSL – the standard level of validation. (Get a free SSL certificate here: https://www.comodo.co.in/ssl-certificates/.)
Extended Validation SSL – offers the highest level of validation, and typically costs more.
Saturday, 30 January 2016
Google says an algorithm that joins the core ranking algorithm needs to be consistent enough to run by itself, without much worry that it won’t work right.
News that Panda is part of Google’s core ranking algorithm has SEOs buzzing about what exactly being part of the core algorithm means. Now Google has shared some more information, saying that the algorithm is consistent enough not to require many changes in the future and can run with less hand-holding.
In a hangout, Andrey Lipattsev, a search quality senior strategist at Google, explained what “core” means: Gary meant that Panda is now part of the core ranking algorithm, and that Google does not need to watch over how it runs any more. It has been tested, it works, and it can now run by itself without much worry. Ammon Johns then asked, “Once they forget how it works, it is core?” To which Lipattsev replied, “That is exactly right.” Source: http://searchengineland.com/google-explains-what-it-means-to-be-part-of-the-core-algorithm-240681
Monday, 18 January 2016
The golden rules that I believe are necessary for SEO work are:
1. Web Design – Producing a visually attractive page.
2. HTML coding – Developing search engine-friendly coding that sits behind the web design.
3. Copy writing – Producing the actual readable text on the page.
4. Marketing – What are the actual searches being used? Which keywords actually get more business for your company?
5. An eye for detail — Even the smallest errors can stop spiderbots from visiting your site.
6. Patience — There is a time lag on any change you make, waiting is a virtue.
7. IT skills — An appreciation of how search engine programs and the algorithms actually work.
1. Many website designers produce more and more eye-catching designs with animations and clever features, hoping to entice people onto their sites. This is the first big mistake: using designs like these may actually decrease your chances of a high Google ranking. Yes, that’s right; all that money you have paid for the website design could be wasted because no one will ever find your site.
The reason for this is that before worrying about bringing people to your site, you need to get the spiderbots to like your site. Spiderbots are pieces of software used by the search engine companies to crawl the Internet looking at all the websites, and then, having reviewed the sites, they use complex algorithms to rank them. Some of the complex techniques used by Web designers cannot be crawled by spiderbots. They come to your site, look at the HTML code and exit stage right, without even bothering to rank your site — meaning you will not be found on any meaningful search.
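The crawl-and-rank process just described can be sketched in miniature. Below is an illustrative Python snippet (not any real search engine’s code; the class name and sample page are made up) showing the first thing a spiderbot does with your HTML: read the markup, pull out the title, and collect the links it will follow next. If your markup hides this information, there is nothing for the bot to rank.

```python
from html.parser import HTMLParser

class SpiderbotSketch(HTMLParser):
    """Minimal sketch of what a crawler extracts from a page:
    the title text and the hrefs it would visit next."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        # Only the text inside <title>…</title> is collected here
        if self.in_title:
            self.title += data

page = """<html><head><title>Acme Widgets</title></head>
<body><a href="http://www.example.com/products.html">Products</a>
<a href="/contact.html">Contact</a></body></html>"""

bot = SpiderbotSketch()
bot.feed(page)
print(bot.title)   # Acme Widgets
print(bot.links)   # ['http://www.example.com/products.html', '/contact.html']
```

A page built entirely from animations or images would come out of this pass with an empty title and no links, which is exactly the “exit stage right” scenario above.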
I am amazed how many times I look at websites and I immediately know they are a waste of money. The trouble is that both the Web designers and the company that paid the money really do not want to know this. In fact, I have stopped playing the messenger of bad news (too many shootings!); I now work round the problem.
Optimizing a website to be Google-friendly is often a compromise between a visually attractive site and an easy-to-find site.
2. The second skill is that of optimizing the actual HTML code to be spiderbot-friendly. I put this as different to the web design because you really do need to be “down and dirty” in the code rather than using an editor like FrontPage, which is OK for website design. This skill takes lots of time and experience to develop, and just when you think you have cracked it, the search engine companies change the algorithms used to calculate how high your site will appear in the search results.
This is no place for even the most enthusiastic amateur. Results need to be constantly monitored, pieces of code added or removed, and a check kept on what the competition is doing. Many people who design their own website feel they will get searched because it looks good, and totally miss out on this step. Without a strong technical understanding of how spiderbots work, you will always struggle to get your company on the first results page in Google. We actually run seven test domains that are testing different theories with different search engines. Remember that different search engines use different criteria and algorithms to rank your site — one size does not fit all.
3. Thirdly, I suggested that copy writing is a skill in its own right. This is the writing of the actual text that people coming to your site will read. The Googlebot and other spiderbots love text – but only when written well in properly constructed English. Some people try to stuff their site with keywords, while others put white writing on white space (so spiderbots can see it but humans cannot).
Spiderbots are very sophisticated; not only will they not fall for these tricks, they may actively penalize your site – in Google terms, this is sandboxing. Google takes new sites and “naughty” sites and effectively sin-bins them for three to six months; you can still be found, but not until results page 14 – which is not very useful. As well as good English, the spiderbots are also reading the HTML code, so the copywriter also needs an appreciation of the interplay between the two. My recommendation for anyone copy writing their own site is to write normal, well-constructed English sentences that can be read by machine and human alike.
4. The fourth skill is marketing. After all, this is what we are doing – marketing your site, and hence your company and its products/services, on the Web. The key here is to set the site up to be accessible to the searches that will provide most business to you. I have seen many sites that can only be found by keying in the company name – which does little to win new customers. So the marketing skill requires knowledge of a company’s business, what it is really trying to sell, and an understanding of which actual searches may pay dividends.
5. The next rule is an eye for detail. Even a simple change to a Web page can create an error that means the spiderbots will not crawl your site. Recently, I put a link to a page that didn’t have www. at the front of the address. The link still worked, but the spiders stopped crawling, and it took my partner to find the error. We have recently invested in a very sophisticated HTML validator that picks up errors that other validators just fail to see. These errors do not stop the pages displaying correctly to the human eye, but they cause massive problems with spiderbots. Almost all the code that I look at on the Web using this validator flags major errors, even from SEO companies.
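The kind of small link mistake described above can be caught automatically. Here is a hypothetical Python helper (the specific checks are illustrative and no substitute for a full HTML validator) that flags hrefs mangled in common ways, such as a pasted address missing its scheme, which browsers silently treat as a relative path:

```python
from urllib.parse import urlparse

def flag_suspect_links(hrefs):
    """Flag links that a hurried edit has likely mangled:
    a scheme with no host, or a bare 'www.' address with no scheme
    (which a browser treats as a relative path, not an absolute URL)."""
    suspects = []
    for href in hrefs:
        parts = urlparse(href)
        if parts.scheme in ("http", "https") and not parts.netloc:
            suspects.append(href)          # e.g. 'http://' pasted alone
        elif not parts.scheme and href.startswith("www."):
            suspects.append(href)          # missing 'http://' prefix
    return suspects

print(flag_suspect_links([
    "http://www.example.com/page.html",   # fine: absolute URL
    "www.example.com/page.html",          # broken: no scheme, so relative
    "/about.html",                        # fine: site-relative link
]))
# → ['www.example.com/page.html']
```

Running a check like this over every page after each edit is a cheap way to apply the "eye for detail" rule consistently.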
6. The sixth rule is patience — it really is a virtue. Some people seem to want to make daily changes and then think they can track the web page ranking results the next day. Unfortunately, it can take a week for absolutely correct changes to take effect, in which time you have made six other changes. Add to this Google’s reluctance to allow new sites straight onto its listings, which adds a waiting factor of maybe three months for new sites, and you have a totally uncontrollable situation. We say to all our clients that a piece of SEO work should be looked at like a marketing campaign that runs for six months, since it is only after that time that a true judgment of the effectiveness of the work can be made.
7. The final and seventh skill is an appreciation of how search engines and algorithms work; this is where both IT and math experience are useful. People who have programmed at a detailed systems level have a natural feeling for how spiderbots will read a page: what they will search for, what tables they will set up, what weightings they may give to different elements. All of this builds a picture of the database that will be created and how it will be accessed when a search is undertaken. Unfortunately, this skill is the most difficult one to learn because it relies on many years’ experience of systems programming.
Written by - Richa Verma
Tuesday, 12 January 2016
Google has pushed a core ranking algorithm update over Jan 8th & 9th 2016.
Did you notice ranking changes with your websites?
Google has confirmed on Twitter that what webmasters were seeing over the weekend was not the Penguin update we are expecting, but rather a core ranking algorithm update.
Google Panda Algorithm is Now Part Of Google’s Core Ranking Signals
As of 2016, Panda is baked in as one of Google’s core ranking signals. In Google’s words: “Panda is an algorithm that’s applied to sites overall and has become one of our core ranking signals. It measures the quality of a site, which you can read more about in our guidelines. Panda allows Google to take quality into account and adjust ranking accordingly.”
Thursday, 24 December 2015
What do you expect to change the most about optimizing for Google in 2016?
Mobile has been a major focal point for Google for years, but in 2015 it was as big a focus as ever. Early in the year, Google announced two significant ranking factors – app indexing and mobile-friendliness – both aimed at improving the mobile experience for users and getting them the content they want/need in the best way possible.
This will (unsurprisingly) continue to be a major focus of Google’s heading into 2016.
In a recent webmaster hangout on Google+, Google webmaster trends analyst John Mueller spoke a little about what to expect for SEO in the coming year (via Barry Schwartz).
The relevant portion of the video begins at about 26 minutes in, but you’ll probably get more by watching the entire video.
Mueller answers a question about general SEO tips for 2016 (as transcribed by Schwartz):
Oh man… I don’t have any magical SEO tips for next year. I can’t tell you about that high ranking meta tag that we’ve been working on [sarcasm].
But past that, of course, high quality content is something I’d focus on. I see lots and lots of SEO blogs talk about user experience, which I think is a great thing to focus on as well. Because that essentially kind of focuses on what we are trying to look at as well. We want to rank content that is useful for them and if your content is really useful for them, then we want to rank it.
We’ve covered mobile-friendliness a great deal throughout the year, so if this is something you’re still struggling with as Mueller implies, I’d encourage you to read back through the content found here.
AMP, of course, refers to Accelerated Mobile Pages, a new open source project and basically Google’s answer to Facebook’s Instant Articles. It is supported by a number of other internet players, including Yahoo, Twitter, LinkedIn, Pinterest, WordPress.com, ChartBeat, Parse.ly, and Adobe Analytics.
You can read more about this here, but Google recently said it will begin sending search traffic to AMP pages beginning in late February. So that’s one major change you can expect in 2016 (and early 2016 at that).
Another big SEO change coming in early 2016 is Google’s next Penguin update which is supposed to update in real time moving forward.
Reference Source: http://www.webpronews.com/what-google-says-about-seo-in-2016-2015-12/
Wednesday, 8 July 2015
The movement to make the Internet more secure through HTTPS brings several useful advancements for webmasters. In addition to security improvements, HTTPS promises future technological advances and potential SEO benefits for marketers.
HTTPS in search results is rising. Recent MozCast data from Dr. Pete shows nearly 20% of first page Google results are now HTTPS.
Sadly, HTTPS also has its downsides.
Marketers run into their first challenge when they switch regular HTTP sites over to HTTPS. Technically challenging, the switch typically involves routing your site through a series of 301 redirects. Historically, these types of redirects are associated with a loss of link equity (thought to be around 15%), which can lead to a loss in rankings. This can offset any SEO advantage that Google claims for switching.
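To make the mechanics concrete, here is a minimal sketch in Python (the function name is illustrative; in production this rule lives in your web server's redirect configuration) of what every 301 redirect in such a migration must do: map each HTTP URL one-to-one to its HTTPS twin, preserving host, path, and query string.

```python
from urllib.parse import urlsplit, urlunsplit

def https_redirect_target(url):
    """Compute the Location a permanent (301) redirect should send
    for an HTTP request: same host, path, and query string, with
    only the scheme upgraded to https."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return None  # already secure (or not a web URL): no redirect
    # Keep netloc, path, query, and fragment; swap only the scheme
    return urlunsplit(("https",) + tuple(parts[1:]))

print(https_redirect_target("http://example.com/shop?item=7"))
# https://example.com/shop?item=7
```

Any redirect that drops the path or query string (for example, sending everything to the HTTPS homepage) discards link equity far beyond the usual 301 loss, which is one reason these migrations are easy to get wrong.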
Many SEOs have anecdotally shared stories of HTTPS sites performing well in Google search results (and our soon-to-be-published Ranking Factors data seems to support this). However, the short-term effect of a large migration can be hard to take. When Moz recently switched to HTTPS to provide better security to our logged-in users, we saw an 8-9% dip in our organic search traffic.
Problem number two is the subject of this post. It involves the loss of referral data. Typically, when one site sends traffic to another, information is sent that identifies the originating site as the source of traffic. This invaluable data allows people to see where their traffic is coming from, and helps spread the flow of information across the web.
SEOs have long used referrer data for a number of beneficial purposes. Oftentimes, people will link back or check out the site sending traffic when they see the referrer in their analytics data. Spammers know this works, as evidenced by the recent increase in referrer spam.
This process stops when traffic flows from an HTTPS site to a non-secure HTTP site. In this case, no referrer data is sent. Webmasters can't know where their traffic is coming from.
Here's how referral data to my personal site looked when Moz switched to HTTPS. I lost all visibility into where my traffic came from.
Enter the meta referrer tag
While we can't solve the ranking challenges imposed by switching a site to HTTPS, we can solve the loss of referral data, and it's actually super-simple.
Almost completely unknown to most marketers, the relatively new meta referrer tag (it's actually been around for a few years) was designed to help out in these situations.
Better yet, the tag allows you to control how your referrer information is passed.
The meta referrer tag works with most browsers to pass referrer information in a manner defined by the user. Traffic remains encrypted and all the benefits of using HTTPS remain in place, but now you can pass referrer data to all websites, even those that use HTTP.
How to use the meta referrer tag
What follows are extremely simplified instructions for using the meta referrer tag. For more in-depth understanding, we highly recommend referring to the W3C working draft of the spec.
The meta referrer tag is placed in the <head> section of your HTML, and references one of five states, which control how browsers send referrer information from your site. The five states are:
None: Never pass referral data
None When Downgrade: Sends referrer information to secure HTTPS sites, but not insecure HTTP sites
Origin Only: Sends only the scheme, host, and port (basically, the subdomain) as the referrer, stripped of the full URL path; i.e. https://moz.com/example.html would simply send https://moz.com
Origin When Cross-Origin: Sends the full URL as the referrer when the target has the same scheme, host, and port (i.e. subdomain) regardless if it's HTTP or HTTPS, while sending origin-only referral information to external sites. (note: There is a typo in the official spec. Future versions should be "origin-when-cross-origin")
Unsafe URL: Always passes the URL string as a referrer. Note if you have any sensitive information contained in your URL, this isn't the safest option. By default, URL fragments, username, and password are automatically stripped out.
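The five states boil down to a small decision table. Here is a simplified sketch in Python of what a browser sends for each state (the function and the policy token strings are illustrative; real browsers follow the W3C draft, which also covers defaults and edge cases):

```python
from urllib.parse import urlsplit, urlunsplit

def referrer_sent(policy, source, destination):
    """Return the referrer a browser would send (or None) under each
    of the five meta referrer states. Simplified sketch, not the spec."""
    src, dst = urlsplit(source), urlsplit(destination)
    origin = urlunsplit((src.scheme, src.netloc, "", "", ""))
    same_origin = (src.scheme, src.netloc) == (dst.scheme, dst.netloc)
    downgrade = src.scheme == "https" and dst.scheme == "http"

    if policy == "none":
        return None                              # never pass anything
    if policy == "none-when-downgrade":
        return None if downgrade else source     # full URL, except to HTTP
    if policy == "origin":
        return origin                            # scheme + host only
    if policy == "origin-when-cross-origin":
        return source if same_origin else origin # full URL internally
    if policy == "unsafe-url":
        return source                            # always the full URL
    raise ValueError("unknown policy: " + policy)

print(referrer_sent("origin", "https://moz.com/example.html", "http://example.com/"))
# https://moz.com
```

Note how "origin" leaks nothing about the page path even to plain HTTP destinations, which is why it is a sensible default for most sites.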
The meta referrer tag in action
Moz’s own outbound links give a sense of how the meta referrer tag works in practice.
We've set the meta referrer tag for Moz to "origin", which means when we link out to another site, we pass only our scheme, host, and port. The end result is that you see https://moz.com as the referrer, stripped of the full URL path (/meta-referrer-tag).
My personal site typically receives several visits per day from Moz. Here's what my analytics data looked like before and after we implemented the meta referrer tag.
For simplicity and security, most sites may want to implement the "origin" state, but there are drawbacks.
One negative side effect was that as soon as we implemented the meta referrer tag, our AdRoll analytics, which we use for retargeting, stopped working. It turns out that AdRoll uses our referrer information for analytics, but the meta referrer tag "origin" state meant that the only URL they ever saw reported was https://moz.com.
Conclusion
We love the meta referrer tag because it keeps information flowing on the Internet. It's the way the web is supposed to work!
It helps marketers and webmasters see exactly where their traffic is coming from. It encourages engagement, communication, and even linking, which can lead to improvements in SEO. Original source: https://moz.com/blog/meta-referrer-tag