Wednesday, June 29, 2011

Common Mistakes in Search Engine Optimization and How to Avoid Them



If you want to achieve a strong online presence, you need to optimize your website for search engines to increase its exposure. On that count, search engine optimization (SEO) is the best option if you want a straightforward way to attract substantial traffic to your website. However, some beginners make mistakes in their SEO practices; instead of helping the website attract a large number of visitors, those mistakes only do it harm. It is therefore crucial to know the common mistakes you should avoid.
The use of too much Flash and too many scripts on the website
One popular mistake in SEO is the use of too much Flash and too many other scripts on a website. Of course, it is a plus if your website is appealing to the eye, so many website owners incorporate Flash pages, which are not optimized for search engines to crawl. Since there is not enough content on your first loading page for a search engine to index, the spider will not be interested in your website, and this will result in a very low page rank. If you really want to use Flash on your website, it is recommended that you complement your Flash page with some text and graphics that are optimized correctly for the search engines.
Not maximizing what ALT tags can do for the content and the images
This is one recommendation that should not be forgotten: use ALT tags. These tags describe the images used on your pages, so try to include them every time an image appears on the website. When writing ALT text for an image, the best SEO practice is to include some of your keywords in the tag while keeping the description accurate.
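As a quick illustration of that recommendation (the function below is hypothetical, not something from the article), a short TypeScript snippet run in the browser can list the images that still lack ALT text:

```typescript
// Minimal sketch: find images on the current page that have no ALT text,
// so they can be given keyword-relevant, accurate descriptions.
function findImagesMissingAlt(doc: Document = document): HTMLImageElement[] {
  return Array.from(doc.querySelectorAll<HTMLImageElement>("img")).filter((img) => {
    const alt = img.getAttribute("alt");
    return alt === null || alt.trim() === "";
  });
}

// Log the source of each image that still needs a description.
findImagesMissingAlt().forEach((img) => {
  console.log(`Missing ALT text: ${img.src}`);
});
```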
Keyword stuffing
Keywords are considered the foundation of search engine optimization. However, some people stuff their content with too many keywords. Overloading a resource with keywords will not help your cause, and when the spiders find out what you have been doing, you are only courting disaster, since your page ranking may suffer. Instead of keyword stuffing, a better strategy is to write good, informative resources that readers and internet users can genuinely appreciate. At the very least, doing so can earn your site return visits from those readers.
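To make "too many keywords" a little more concrete, here is a rough TypeScript sketch of a keyword-density check; the 3% threshold is a common rule of thumb, not a figure from the article:

```typescript
// Rough sketch: how often does a keyword appear relative to the total word count?
// The 3% threshold below is a common rule of thumb, not an official limit.
function keywordDensity(text: string, keyword: string): number {
  const words = text.toLowerCase().split(/\W+/).filter((w) => w.length > 0);
  if (words.length === 0) return 0;
  const hits = words.filter((w) => w === keyword.toLowerCase()).length;
  return (hits / words.length) * 100;
}

const copy =
  "Our SEO guide explains how search engines crawl, index, and rank pages, " +
  "why readable and genuinely informative copy earns return visits, and how " +
  "repeating the same phrase over and over only drives readers away.";

const density = keywordDensity(copy, "seo");
console.log(`Keyword density: ${density.toFixed(1)}%`);
if (density > 3) {
  console.warn("This starts to look like keyword stuffing; rewrite for the reader.");
}
```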
Content duplication
It is also a mistake to steal content from other websites. Take note that search engine robots can distinguish original content from plagiarized content. You need to come up with good, informative content of your own to entice readers to come back; plagiarized content does nothing for your page ranking.
Ignoring the search engine guidelines
Lastly, most newbie SEO practitioners overlook the search engines' webmaster guidelines. It is important to read and become familiar with these guidelines so you can avoid the SEO techniques and practices that are prohibited. Prohibited or fraudulent techniques are collectively known as black hat SEO, and they are considered unethical, at least in the realm of the internet and SEO. Once you use them, you risk getting your website banned from the search engines. Starting your online marketing on the right note means steering away from these common search engine optimization mistakes and traps.

Submitted By: Seomul Evans
Published At: Isnare.com
Permanent Link: http://www.isnare.com/?aid=851403&ca=Internet

Tuesday, June 28, 2011

First Draft of DeviceOrientation Event Specification Published

Incubator Group Report: Semantic Sensor Network XG Final Report

The W3C Semantic Sensor Network Incubator Group has published its final report. As networks of sensors become more commonplace, there is a greater need for their management and querying to be assisted by standards and computer reasoning. Building on the OGC's Sensor Web Enablement services-based architecture and standards, including its four description languages, the group produced ontologies for describing sensors and extended a language to support semantic annotations. The report lists use cases, reviews existing ontologies leading to the selection of the SSN ontology, and analyzes examples and semantic markup as well as mappings to existing standards. It also includes directions for future work in the context of Linked Sensor Data and the Semantic Internet of Things, at the border between the Internet of Things and the Internet of Services. The group plans to create a W3C Community Group to focus on the maintenance and extension of the SSN ontology.
This publication is part of the Incubator Activity, a forum where W3C Members can innovate and experiment. This work is not on the W3C standards track.


Thursday, June 23, 2011

SEO-Optimizing your site for local searches


The Google Base submission system is a platform where you can directly upload items or categorize specific content, which can then be searched on Google Maps, Froogle, and Google itself. This makes searching easy. You do not even need a website for people to find you these days; all of your information can be posted directly to Google Base.

Google has been promoting Google Maps on search results pages, and according to Hitwise, their promotion is paying off. As a result, Google Maps has seen a sizeable increase in their traffic in the last year. If you were to search for an address on Google before last January, you would have seen map links for Yahoo, MapQuest, and Google Maps. Now, you only see a link for Google Maps.

Local search engine listings may be little more than an afterthought to some webmasters, but they are a source of business that you shouldn’t ignore. Optimizing your site for local searches and making sure you’re listed in the local versions of the major search engines is a smart move, and doing so is fairly quick and easy. The three biggies in local search are Google Local, Yahoo! Local, and Bing Local.

Wednesday, June 22, 2011

Six Drafts Published Related to XSLT, XQuery, XPath

W3C published six documents related to XSLT, XQuery, and XPath:
This document defines serialization of an instance of the data model as defined in [XQuery and XPath Data Model (XDM) 3.0] into a sequence of octets. Serialization is designed to be a component that can be used by other specifications such as [XSL Transformations (XSLT) Version 3.0] or [XQuery 3.0: An XML Query Language].

This document defines an XML Syntax for [XQuery 3.0: An XML Query Language].

XML is a versatile markup language, capable of labeling the information content of diverse data sources including structured and semi-structured documents, relational databases, and object repositories. A query language that uses the structure of XML intelligently can express queries across all these kinds of data, whether physically stored in XML or viewed as XML via middleware. This specification describes a query language called XQuery, which is designed to be broadly applicable across many types of XML data sources.


This document defines the XQuery and XPath Data Model 3.0, which is the data model of [XML Path Language (XPath) 3.0], [XSL Transformations (XSLT) Version 3.0], and [XQuery 3.0: An XML Query Language], and any other specifications that reference it. This document is the result of joint work by the [XSL Working Group] and the [XML Query Working Group].

XPath 3.0 is an expression language that allows the processing of values conforming to the data model defined in [XQuery and XPath Data Model (XDM) 3.0]. The data model provides a tree representation of XML documents as well as atomic values such as integers, strings, and booleans, and sequences that may contain both references to nodes in an XML document and atomic values. The result of an XPath expression may be a selection of nodes from the input documents, or an atomic value, or more generally, any sequence allowed by the data model. The name of the language derives from its most distinctive feature, the path expression, which provides a means of hierarchic addressing of the nodes in an XML tree. XPath 3.0 is a superset of [XML Path Language (XPath) Version 1.0], with the added capability to support a richer set of data types, and to take advantage of the type information that becomes available when documents are validated using XML Schema. A backwards compatibility mode is provided to ensure that nearly all XPath 1.0 expressions continue to deliver the same result with XPath 3.0; exceptions to this policy are noted in [I Backwards Compatibility with XPath 1.0].
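None of the following comes from the drafts themselves; it is a small browser-side TypeScript sketch of the ideas above. Browsers expose XPath 1.0 through document.evaluate rather than XPath 3.0, but a path expression reads the same way, and the selected node is then turned into a sequence of octets in the spirit of the serialization draft.

```typescript
// Sketch only: browsers implement XPath 1.0 (not 3.0) via document.evaluate,
// but the hierarchic path expression below works identically in both.
const xml = `<library>
  <book year="2007"><title>XQuery Basics</title></book>
  <book year="2011"><title>XPath in Practice</title></book>
</library>`;

const doc = new DOMParser().parseFromString(xml, "application/xml");

// Path expression: select the title of the first book published after 2010.
const result = doc.evaluate(
  "/library/book[@year > 2010]/title",
  doc,
  null,
  XPathResult.FIRST_ORDERED_NODE_TYPE,
  null
);

const node = result.singleNodeValue;
if (node) {
  // Serialize the selected node to a string, then to UTF-8 octets, loosely
  // mirroring what the serialization document describes for XDM instances.
  const text = new XMLSerializer().serializeToString(node);
  const octets = new TextEncoder().encode(text);
  console.log(text, `${octets.length} octets`);
}
```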

For Review: Updated Techniques for Web Content Accessibility Guidelines (WCAG)

The Web Content Accessibility Guidelines Working Group today requests review of draft updates to Notes that accompany WCAG 2.0: Techniques for WCAG 2.0 (Editors' Draft) and Understanding WCAG 2.0 (Editors' Draft). Comments are welcome through 26 August 2011. (This is not an update to WCAG 2.0, which is a stable document.) To learn more about the updates, see the Call for Review: WCAG 2.0 Techniques Draft Updates e-mail. Read about the Web Accessibility Initiative (WAI).

Friday, June 17, 2011

SEO Ranking factors and Social Media

Website rankings are no longer just about adding meta tags and building links. The ranking factors have expanded to include social media as well.
Both Bing and Google have confirmed that links shared through Twitter and Facebook have a direct impact on rankings.

From an interview by Danny Sullivan on this topic:

Danny Sullivan: If an article is retweeted or referenced much in Twitter, do you count that as a signal outside of finding any non-nofollowed links that may naturally result from it?

Bing: We do look at the social authority of a user. We look at how many people you follow, how many follow you, and this can add a little weight to a listing in regular search results. It carries much more weight in Bing Social Search, where tweets from more authoritative people will flow to the top when best match relevancy is used.

Google: Yes, we do use it as a signal. It is used as a signal in our organic and news rankings. We also use it to enhance our news universal by marking how many people shared an article.

Danny Sullivan: Do you try to calculate the authority of someone who tweets that might be assigned to their Twitter page. Do you try to “know,” if you will, who they are?

Bing: Yes. We do calculate the authority of someone who tweets. For known public figures or publishers, we do associate them with who they are. (For example, query for Danny Sullivan)

Google: Yes we do compute and use author quality. We don’t know who anyone is in real life :-)

Danny Sullivan: Do you calculate whether a link should carry more weight depending on the person who tweets it?

Bing: Yes.

Google: Yes we do use this as a signal, especially in the “Top links” section [of Google Realtime Search]. Author authority is independent of PageRank, but it is currently only used in limited situations in ordinary web search.

Wednesday, June 15, 2011

Two XML Schema Notes published: Unicode block names for use in XSD regexps; XSD datatype for IEEE floating-point decimal

The XML Schema Working Group published two Group Notes today: Unicode block names for use in XSD regular expressions and An XSD datatype for IEEE floating-point decimal. The former lists the names of character categories and character blocks defined by Unicode and used in the regular expression language defined by XSD 1.0 and XSD 1.1. The latter defines a datatype designed for compatibility with IEEE 754 floating-point decimal data, which can be supported by XSD 1.1 processors as an implementation-defined datatype. Learn more about the Extensible Markup Language (XML) Activity.
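As a quick, hedged illustration (none of this appears in the Notes themselves): an XSD pattern facet can restrict a string to a Unicode block with a block escape such as \p{IsBasicLatin}. TypeScript/JavaScript regular expressions have no block escapes, so the sketch below spells the same block out as a code-point range; the xs:simpleType fragment is only quoted as a string.

```typescript
// Illustration only: the XSD fragment is quoted as a string; the regex beneath
// is a rough JavaScript analogue of the IsBasicLatin block escape.
const xsdFragment = `
<xs:simpleType name="asciiOnly">
  <xs:restriction base="xs:string">
    <xs:pattern value="\\p{IsBasicLatin}*"/>
  </xs:restriction>
</xs:simpleType>`;
console.log(xsdFragment.trim());

// JavaScript regexes lack block escapes, so the Basic Latin block
// (U+0000 to U+007F) is written as an explicit character-class range.
const basicLatinOnly = /^[\u0000-\u007F]*$/;

console.log(basicLatinOnly.test("plain ASCII")); // true
console.log(basicLatinOnly.test("café"));        // false (é is outside Basic Latin)
```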

Tuesday, June 14, 2011

SEO-Importance of Relevant Content

Having content relevant to your main page or site topic is perhaps the most important SEO aspect of a page. You can put all the keywords you want in the meta tags and alt image tags, etc — but if the actual readable text on the page is not relevant to the target keywords, it ends up basically being a futile attempt.

While it is important to include as many keywords in your page copy as possible, it is equally important for the copy to read well and make sense. I'm sure we've all seen keyword-stuffed pages written by SEO companies that honestly don't make much sense from the reader's point of view.

When creating your site copy, just write naturally, explaining whatever information you're discussing. The key is to make it relevant and to have it make sense to the reader. Even if you trick the search engines into thinking your page is great, a potential customer who arrives at the site and can't make heads or tails of your information, or finds that it just feels spammy, will be clicking on to the next website within a matter of seconds.

Friday, June 10, 2011

First Draft of CSS Regions Module Published

The Cascading Style Sheets (CSS) Working Group has published the First Public Working Draft of CSS Regions Module. The CSS Regions module allows content to flow across multiple areas called regions. The regions do not necessarily follow the document order. The CSS Regions module provides an advanced content flow mechanism, which can be combined with positioning schemes as defined by other CSS modules such as the Multi-Column Module or the Grid Layout Module to position the regions where content flows. Learn more about the Style Activity.
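For a feel of how a named flow is declared, here is a hedged TypeScript sketch that injects a stylesheet using the flow-into / flow-from property names from later drafts of the module (the exact FPWD syntax differed, and 2011-era browsers only shipped prefixed, experimental implementations); browsers that do not support the module simply ignore the declarations.

```typescript
// Sketch only: flow-into / flow-from are the property names used in later
// drafts of CSS Regions; unsupported browsers silently ignore them.
const regionsCss = `
  #article { flow-into: article-thread; } /* content leaves normal layout and joins a named flow */
  .region  { flow-from: article-thread; } /* each region, wherever it sits, renders a slice of that flow */
`;

const styleEl = document.createElement("style");
styleEl.textContent = regionsCss;
document.head.appendChild(styleEl);
```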

Monday, June 6, 2011

Visibility, Timing Control for Script-Based Animations, Navigation Timing

The Web Performance Working Group published three drafts (a short combined usage sketch follows the list):
  • a First Public Working Draft of Page Visibility, which defines a means for site developers to programmatically determine the current visibility state of the page in order to develop power and CPU efficient web applications.
  • a First Public Working Draft of Timing control for script-based animations, which defines an API web page authors can use to write script-based animations where the user agent is in control of limiting the update rate of the animation. Using this API should result in more appropriate utilization of the CPU by the browser.
  • an update to the Candidate Recommendation for Navigation Timing, which defines an interface for web applications to access timing information related to navigation and elements.
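The following TypeScript sketch is not sample code from the specifications; it pauses a script-driven animation while the page is hidden and reads one navigation-timing figure. It assumes a modern browser where document.hidden, requestAnimationFrame, and performance.timing are available unprefixed (in 2011 the first two shipped behind vendor prefixes).

```typescript
// Page Visibility: stop burning CPU while the tab is hidden.
// Timing control for script-based animations: let the browser pace the frames.
// Navigation Timing: read how long the page load actually took.

let running = !document.hidden;

function frame(timestamp: number): void {
  // ... update the animation using `timestamp` here ...
  if (running) {
    requestAnimationFrame(frame);
  }
}

document.addEventListener("visibilitychange", () => {
  running = !document.hidden;
  if (running) {
    requestAnimationFrame(frame); // resume when the page becomes visible again
  }
});

if (running) {
  requestAnimationFrame(frame);
}

// Navigation Timing, using the interface defined by the Candidate Recommendation
// (newer code would use performance.getEntriesByType("navigation") instead).
// Read it just after the load event so loadEventEnd has been populated.
window.addEventListener("load", () => {
  setTimeout(() => {
    const t = performance.timing;
    console.log(`Page load took ${t.loadEventEnd - t.navigationStart} ms`);
  }, 0);
});
```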