Wednesday, December 7, 2011

What is Code to Text Ratio?

Code to Text Ratio

The Code to Text Ratio measures the percentage of actual text in a web page. A content ratio tool extracts the text from paragraphs and the anchor text from the HTML source code and calculates the Code to Text Ratio from this information.

Why Is the Code to Text Ratio Important for SEO?

The code to text ratio is used by search engines to assess the relevancy of a web page; a higher Code to Text Ratio is considered good for ranking purposes.

An example of how to calculate the code to text ratio of a web page is given below:

Web Page Size :
17481 Bytes = 17 KB

Code Size :
13692 Bytes = 13 KB

Text Size :
3789 Bytes = 4 KB

Code to Text Ratio = Text Size/Web Page Size*100

Code to Text Ratio = 3789/17481*100 = 21.67%
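
This calculation is easy to automate. Below is a minimal Python sketch, using only the standard library, that strips the markup out of a page and computes its code to text ratio. The example.com URL is just a placeholder, and real content ratio tools may count text slightly differently (for instance, by including image alt text or excluding navigation links).

# Minimal code to text ratio calculator (illustrative sketch only).
from html.parser import HTMLParser
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collects the visible text of an HTML document, skipping script/style blocks."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def code_to_text_ratio(html):
    """Return (page size, text size, ratio %) for an HTML string."""
    parser = TextExtractor()
    parser.feed(html)
    text = " ".join(parser.chunks)
    page_size = len(html.encode("utf-8"))  # whole page, markup included
    text_size = len(text.encode("utf-8"))  # visible text only
    return page_size, text_size, text_size / page_size * 100


if __name__ == "__main__":
    # example.com is a placeholder; point this at any page you want to check.
    html = urlopen("http://example.com/").read().decode("utf-8", "ignore")
    page, text, ratio = code_to_text_ratio(html)
    print(f"Page: {page} bytes, Text: {text} bytes, Ratio: {ratio:.2f}%")

Plugging in the figures above, 3789 bytes of text on a 17481-byte page gives the same 21.67%.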

Wednesday, June 8, 2011

New Google Product 2011

Google, the most popular search engine worldwide, launched a number of new products and services in 2011. Our SEO expert explains some of the new Google products and services of 2011 that everyone has been talking about:


  • Gmail Call Phone: Now Gmail users can easily make free calls to landlines and cell phones from within the Gmail interface.

  • Gmail Priority Inbox: Google made its Gmail service smarter with Priority Inbox. This tool is very useful as it can prioritize a user's important mail, helping them sort messages and pare down inbox clutter.

  • Google TV: Google, one of the top search engines, can already search the web for anything. Now you can also surf TV channels, peer into channels worldwide, and watch what the world is watching.

  • Google News Revamp: Google News has been given an important overhaul in an effort to provide users with more personalization options and greater relevancy. With this new feature, users can custom-tailor their news feeds to the subjects they are most interested in, whether business, sci-tech, world news or anything else. This means users will find news stories that match their interests, surfaced according to those interests rather than simply by their day of publication.

  • Chrome OS: Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks. Speed, simplicity and security are the key aspects of Google Chrome OS. Chrome OS is designed to be fast and lightweight, to start up and get onto the web in a few seconds. Google Chrome OS is created for people who spend most of their time on the web, and is being designed to power computers ranging from small netbooks to full-size desktop systems.

  • Nexus One: The Nexus One is a smartphone from Google, manufactured by Taiwan's HTC Corporation. It runs the Android open source mobile operating system. Its features include the ability to transcribe voice to text, an additional microphone for dynamic noise suppression, and voice-guided turn-by-turn navigation for drivers.

  • Google Social Search: Google Social Search helps you discover relevant content from your social connections, a set of your online friends and contacts. Content from your friends and acquaintances is sometimes more relevant and meaningful to you than content from any random person. Your Social Search experience is personal, and the highlighted content you see is unique to you and your social connections. The content you may see includes websites, blogs and other pages shared or created by your friends; images shared by your social connections; relevant articles from your Google Reader subscriptions; profiles of people you know beneath results from social sites; and web content that has been recommended or shared by others.

  • Google Buzz: Google Buzz is a new way to start conversations about the things you find interesting. It's built right into Gmail, so you don't have to peck out an entirely new set of friends from scratch; it just works. Buzz brings this network to the surface by automatically setting you up to follow the people you email and chat with the most. Buzz integrates tightly with your existing Gmail inbox, so you're sure to see the stuff that matters most as it happens in real time.

Friday, March 18, 2011

New Google Update March 2011

New: Google Lets You Block Any Site From Search Results

Google announced you can now hide or block certain sites from showing up in the Google search results.

When you do a search in Google, the search results will show a new link near the “Cache” link when you click a result and then return to Google. The link that Google adds to the search results reads “Block all example.com results.” Clicking on that will allow you to block the site from showing up in the Google results.

If you are not logged in, Google will immediately block the result and confirm they did so. But if you want the site to remain blocked on future searches, you have to log in and confirm the block request.

At the bottom of the search results it will show you that there are blocked sites. It will let you show the blocked results or manage your blocked sites.

Here is the confirmation page:

Here is the manage block page:

You can access this page under your “Search Settings.”

Google said:

We’re adding this feature because we believe giving you control over the results you find will provide an even more personalized and enjoyable experience on Google. In addition, while we’re not currently using the domains people block as a signal in ranking, we’ll look at the data and see whether it would be useful as we continue to evaluate and improve our search results in the future. The new feature is rolling out today and tomorrow on Google.com in English for people using Chrome 9+, IE8+ and Firefox 3.5+, and we’ll be expanding to new regions, languages and browsers soon. We hope you find it useful, and we’ll be listening closely to your suggestions.

Thursday, February 17, 2011

SEO Technical / Tactics Interview Questions

Every SEO prefers certain tactics over others, but familiarity with many of them can indicate a deeper understanding of the industry. And while not every SEO needs a web development background, having such skills can help set someone apart from the crowd.

Technical / Tactics

1. Give me a description of your general SEO experience.
2. Can you write HTML code by hand?
3. Could you briefly explain the PageRank algorithm?
4. Have you created any SEO tools, either from scratch or pieced together from others?
5. What do you think of PageRank?
6. What do you think of using XML sitemaps?
7. What are your thoughts on the direction of Web 2.0 technologies with regards to SEO?
8. What SEO tools do you regularly use?
9. Under what circumstances would you look to exclude pages from search engines using robots.txt vs meta robots tag?
10. What areas do you think are currently the most important in organically ranking a site?
11. Do you have experience in copywriting and can you provide some writing samples?
12. Have you ever had something you've written reach the front-page of Digg? Sphinn? Or be Stumbled?
13. Explain to me what META tags matter in today's world.
14. Explain the various steps that you would take to optimize a website.
15. If the company whose site you've been working for has decided to move all of its content to a new domain, what steps would you take?
16. On a scale from 1 to 10, rate the most important "on page" elements.
17. Review the code of past clients/company websites where SEO was performed.
18. What do you think about link buying?
19. What is Latent Semantic Analysis (LSI Indexing)?
20. What is Phrase Based Indexing and Retrieval and what roles does it play?
21. What is the difference between SEO and SEM?
22. What kind of strategies do you normally implement for back links?
23. What role does social media play in an SEO strategy?
24. What things wouldn't you do to increase rankings because the risk of penalty is too high?
25. What's the difference between PageRank and Toolbar PageRank?
26. Why might you want to use nofollow on an internal link?

I hope this helps and that you all like it. Soon I will post SEO Analysis Interview Questions, so keep reading.


Tuesday, February 15, 2011

Google update cracks down on duplicate content

Those within the SEO industry have known for some time that duplicate content is a big no-no with the search engines, especially Google. However, despite the lectures and best practices slamming the use of duplicate content, scraper sites and plagiarised web copy, sites that steal content still managed to slip through the cracks in Google’s algorithm and rank within its pages.

This caused much ire at Google, and among the many website owners and SEO experts who played by Google's rules. Google (as ever) has been busy working behind the scenes to stamp out this most heinous threat to the purity of its SERPs. In its last big algorithm update, in January, Google moved to deal with the issue of duplicate content and content scraper sites, aiming to keep its index free of spam and thievery.

In the January update, the search engine put in place further measures to drive spam content sites from its pages, rewarding sites with unique, quality content.

Matt Cutts, Google's head of anti-spam, commented on the update:

“[I mentioned] that ‘we’re evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content.’”

“That change was approved at our weekly quality launch meeting last Thursday and launched earlier this week.”

“This was a pretty targeted launch: slightly over 2% of queries change in some way, but less than half a percent of search results change enough that someone might really notice.”

“The net effect is that searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site’s content.”

Google is talking tough, but what exactly will be affected by this change, and how can you avoid being adversely affected by the ‘Google Content Update’?

Firstly, if your website utilises content from other sites, meaning that you aggregate content from elsewhere onto your site (such as via an RSS feed), then your site can be seen as a scraper site or content farm. None of the content is your own, and it is therefore of no use to Google in terms of offering it to its users.

Secondly, if your website uses content that is very low in quality, and very high in quantity, in a bid to increase the size of your website and the number of keywords that your site ranks for, you could also be in trouble with this new update. Some websites use content in this way, and receive their content from offshore companies using non-native speakers because of the lower cost implications – regardless of the quality.

The way to protect yourself, and your website, against the Content Update (snappy, eh?) is to ensure that any content added to your website is of high quality, relevant to your site and your industry, and unique.

For Google to trust a website enough to rank it highly in the SERPs it needs to have unique, relevant content, and to feature strong backlinks. All of this can be achieved through ethical SEO.



Thursday, February 3, 2011

Google accused of criminal intent over StreetView data

Google is "almost certain" to face prosecution for collecting data from unsecured wi-fi networks, according to Privacy International (PI).

The search giant has been under scrutiny for collecting wi-fi data as part of its StreetView project.

Google has released an independent audit of the rogue code, which it has claimed was included in the StreetView software by mistake.

But PI is convinced the audit proves "criminal intent".

"The independent audit of the Google system shows that the system used for the wi-fi collection intentionally separated out unencrypted content (payload data) of communications and systematically wrote this data to hard drives. This is equivalent to placing a hard tap and a digital recorder onto a phone wire without consent or authorisation," said PI in a statement.

This would put Google at odds with the interception laws of the 30 countries that the system was used in, it added.

Scotland Yard

"The Germans are almost certain to prosecute. Because there was intent, they have no choice but to prosecute," said Simon Davies, head of PI.

In the UK, the ICO has said it is reviewing the audit but that, for the time being, it has no plans to pursue the matter.

PI, however, does intend to take the case to the police.


"I don't see any alternative but for us to go to Scotland Yard," said Mr Davies.

The revelation that Google had collected such data led the German Information Commissioner to demand that it hand over a hard disk so the regulator could examine exactly what had been collected.

It has not yet received the data and has extended the original deadline for it to be handed over.

The Australian police have also been ordered to investigate Google for possible breach of privacy.

'Systematic failure'

According to Google, the code which allowed data to be collected was part of an experimental wi-fi project undertaken by an unnamed engineer to improve location-based services and was never intended to be incorporated in the software for StreetView.

"As we have said before, this was a mistake. The report today confirms that Google did indeed collect and store payload data from unencrypted wi-fi networks, but not from networks that were encrypted. We are continuing to work with the relevant authorities to respond to their questions and concerns," said a Google spokesman.

"This was a failure of communication between and within teams," he added.

But PI disputes this explanation.

"The idea that this was a work of a lone engineer doesn't add up. This is complex code and it must have been given a budget and been overseen. Google has asserted that all its projects are rigorously checked," said Mr Davies.

"It goes to the heart of a systematic failure of management and of duty of care," he added.


Friday, January 28, 2011

Facebook Statistics, Stats & Facts for 2011... Did you know 1 in every 13 people on earth uses Facebook?

Wednesday, January 19, 2011

Google and Bing have announced that links shared on Twitter and Facebook affect rankings

Both Bing and Google have confirmed (via an excellent interview by Danny Sullivan) that links shared through Twitter and Facebook have a direct impact on rankings (in addition to the positive second-order effects they may have on the link graph). This has long been suspected by SEOs (in fact, many of us posited it was happening as of November of last year following Google + Bing’s announcements of partnerships with Twitter), but getting this official confirmation is a substantive step forward.

In addition to that revelation, another piece of critical data came via yesterday’s announcement:


Danny Sullivan: If an article is retweeted or referenced much in Twitter, do you count that as a signal outside of finding any non-nofollowed links that may naturally result from it?


Bing: We do look at the social authority of a user. We look at how many people you follow, how many follow you, and this can add a little weight to a listing in regular search results. It carries much more weight in Bing Social Search, where tweets from more authoritative people will flow to the top when best match relevancy is used.


Google: Yes, we do use it as a signal. It is used as a signal in our organic and news rankings. We also use it to enhance our news universal by marking how many people shared an article.


Danny Sullivan: Do you try to calculate the authority of someone who tweets that might be assigned to their Twitter page. Do you try to “know,” if you will, who they are?


Bing: Yes. We do calculate the authority of someone who tweets. For known public figures or publishers, we do associate them with who they are. (For example, query for Danny Sullivan)


Google: Yes we do compute and use author quality. We don’t know who anyone is in real life :-)


Danny Sullivan: Do you calculate whether a link should carry more weight depending on the person who tweets it?


Bing: Yes.


Google: Yes we do use this as a signal, especially in the “Top links” section [of Google Realtime Search]. Author authority is independent of PageRank, but it is currently only used in limited situations in ordinary web search.


We now know that those link sharing activities on Twitter + Facebook are evaluated based on the person/entity sharing them through a score Google calls “Author Authority,” and Bing calls “Social Authority.”


We can probably predict a lot of the signals the search engines care about when it comes to social sharing; some of my guesses include:



  • Diversity of Sources – having 50 tweets of a link from one account, like having 50 links from one site, is not nearly as valuable as 50 tweets from 50 unique accounts.

  • Timing – sharing that occurs when an RSS feed first publishes a story may be valuable in QDF, but tweets/shares of older pieces could be seen as more indicative of lasting value and interest (rather than just sharing what’s new).

  • Surrounding Content – the message(s) accompanying the link may give the engines substantive information about their potential relevance and topic; it could even fill the gap that’s left by the lack of anchor text, particularly on Twitter.

  • Engagement Level – the quantity of clicks, retweets, likes, etc. (if/when measurable) could certainly impact how much weight is given to the link.

We can probably also take a stab at some of the signals Google + Bing use for Author/Social Authority in the context of the sharing/tweeting source:



  • Quantity of Friends/Followers – like links, it’s likely the case that more is better, though there will likely be caveats; low quality bots and inauthentic accounts are likely to be filtered (and may be much easier to spot than spammy links, due to the challenge they find in getting any “legitimate” friends/followers).

  • Importance of Friends/Followers – the friends/followers you have, like the link sources you have, probably also play a role. Earn high-"authority" followers and you are likely to be seen as a high-authority person yourself.

  • Analysis of Friends/Followers Ratios – Much like the engines’ analysis of the editorial nature of links, consideration of whether a social user is engaging in following/follower behavior purely out of reciprocity vs. true interest and engagement may be part of authority scoring. If you have 100K followers and follow 99K of them, but the engagement between you and your followers is slim, you’re likely not as authoritative as an account with 100K followers + 5K following, but those followers are constantly engaged, retweeting, liking, sharing, etc.

  • Topic Focus / Relevance – The consistency or patterns between your sharing behaviors could also be a consideration, using topic analysis, patterns in the sources of shared/tweeted links, etc. Being an “authority” could even be subject-specific, such that when a prominent SEO tweets links to celebrity news it has less of an impact than when they tweet links to a web marketing resource.

  • Association Bias – I suspect Google and Bing do a good job of associating social authors with the sites/domains they’re “part of” vs. independent from. Sometimes, this might be as easy as looking at the URL associated with the account, other times it could be based on patterns like where you most often tweet/share links to or whether your account is listed on pages from that site. Basically, if @randfish tweets links to *.seomoz.org, that probably means less than when I tweet links to bitlynews or when someone outside the company tweets links to SEOmoz.

These signals represent my opinions only, and while it’s very likely that at least some are being used, it’s even more likely that there are many more that aren’t listed above. Over time, hopefully we’ll discover more about the impact of social sharing on web rankings and how we can best combine SEO + social media marketing.
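
To make that guesswork concrete, here is a toy Python sketch of how a few of these hypothetical signals might be combined into a single score. The field names, weights and formula are invented purely for illustration; neither Google nor Bing has published how Author Authority or Social Authority is actually computed.

# Toy illustration of the guessed-at social authority signals above.
# All weights and thresholds are invented for illustration only.
from dataclasses import dataclass
import math


@dataclass
class SocialAccount:
    followers: int
    following: int
    engagements_per_tweet: float  # average retweets/likes/replies per tweet
    topic_overlap: float          # 0..1, how on-topic the shared links are


def toy_social_authority(acct: SocialAccount) -> float:
    """Combine the hypothetical signals into a rough 0..1 score."""
    # Audience size, damped so a million followers is not 1000x ten thousand.
    reach = math.log10(acct.followers + 1) / 7
    # Penalise pure follow-back behaviour (100K/99K scores below 100K/5K).
    ratio_factor = min(acct.followers / max(acct.following, 1) / 10, 1.0)
    # Engagement matters more than raw counts.
    engagement = min(acct.engagements_per_tweet / 50, 1.0)
    # Topical focus acts as a relevance multiplier.
    return reach * (0.4 * ratio_factor + 0.6 * engagement) * acct.topic_overlap


if __name__ == "__main__":
    reciprocal = SocialAccount(100_000, 99_000, 0.5, 0.8)  # follow-back heavy
    engaged = SocialAccount(100_000, 5_000, 30.0, 0.8)     # engaged audience
    print(f"reciprocal account: {toy_social_authority(reciprocal):.3f}")
    print(f"engaged account:    {toy_social_authority(engaged):.3f}")

The numbers themselves are meaningless; the shape of the calculation is the point. The account with 99K reciprocal follows and almost no engagement scores far below the account with a smaller following list and a genuinely engaged audience, which is exactly the ratio argument made above.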


To me, the most exciting part about this is the potential to reduce webspam and return to a more purely editorial model. While people often link to, read and enjoy sources that link out manipulatively, very few of us are likely to follow a Twitter account, friend someone on Facebook, or "like" something on a social site that's inauthentic, manipulative or spammy. The social graph isn't necessarily cleaner, but the complexity of spam is far lower.


Here's to the evolution of organic marketing – search, social, content, blogs, links – it's all coming together faster than ever before, and that's a very good thing for holistically minded web marketers.
