Tuesday, June 30, 2009

9-year-old girl, CEO of a web design company





Sreelakshmi Suresh, a nine-year-old girl from Kozhikode district in Kerala and CEO of the web design company eDesign, is being credited as the youngest web designer in the state after developing the website of the Bar Council of Kerala, BarCouncilKerala.com.


Sreelakshmi, the daughter of an advocate, is believed to have already designed and developed more than 10 websites. At the age of just eight, she built the official website of her own school, a website for deaf people and a Malayalam news website for kids.


Sreelakshmi is the only member of the Association of American Webmasters under the age of 18.


The Association of American Webmasters has honoured Sreelakshmi Suresh with membership along with its highest award for excellence in web design, the Gold Web Award. [Source: Sify/ANI]


Source: http://tungblog.atikomtrirat.com/2009/06/9-years-old-girl-develops-bar-council.html

Best Free Weblog Software Other Than WordPress

Weblog software is designed to simplify the creation and maintenance of weblogs. As specialized content management systems, weblog applications support the authoring, editing, and publishing of blog posts and comments, with special functions for image management, web syndication, and moderation of posts and comments.




We all know WordPress is one of the most popular weblog applications. What other options can we choose? We have collected the following top-quality weblog software other than WordPress for you.

Melody


Melody is an open source content management, blogging and publishing platform, derived from the popular blogging tool Movable Type. From the outset, Melody is distinct from Movable Type in having put in place a set of processes that help the greater Movable Type and Melody communities contribute features, changes and fixes back to the core product more freely and quickly. This allows independent developers to pursue the features and preferences that matter to them most.


The promise of Melody is to serve its community by building a product according to the principles and philosophies that have helped make WordPress, Apache, Linux, Firefox and other open source applications so successful. Those principles include creating and fostering open communication channels with the user base, building transparency into the roadmap, and permitting users to contribute more freely back to the core product.

Pricing: Free
Source: http://openmelody.org/




Textpattern


When it comes to publishing on the internet, beginners and experts alike are met with a bothersome paradox: word processors and graphics applications allow anyone to do a pretty good job of managing text and images on a personal computer, but to make these available to the worldwide web – a seemingly similar environment of documents and destinations – ease of use vanishes behind sudden requirements for multilingual programming skills, proficiency in computer-based graphic design, and, ultimately, the patience of a saint.


Textpattern is a flexible, elegant and easy-to-use content management system and it is both free and open source. Textpattern is a web application designed to help overcome these and other hurdles to publishing online, and to simplify the production of well-structured, standards-compliant web pages.

Pricing: Free
Source: http://www.textpattern.com/




Drupal

Drupal is a free software package that allows an individual or a community of users to easily publish, manage and organize a wide variety of content on a website. Drupal is open-source software distributed under the GPL (General Public License) and is maintained and developed by a community of thousands of users and developers.

Pricing: Free
Source: http://drupal.org/




Radiant

Radiant is a no-fluff, open source content management system designed for small teams. Built from the ground up to be as simple as possible, Radiant features an elegant administrative interface that centers around three key components: pages, snippets, and layouts.


Radiant is licensed under the MIT License. This means that Radiant is free for commercial and non-profit use. It also means that you are free to modify and distribute Radiant as long as you don’t remove the appropriate notices from the source code.

Pricing: Free
Source: http://radiantcms.org/




Frog CMS

Frog CMS is a PHP version of Radiant CMS, a well-known Ruby on Rails application. Frog CMS shares the goal of simplifying content management, offering an elegant user interface, flexible per-page templating, simple user management and permissions, and everything you need for file management.


Frog requires a MySQL database with InnoDB support, a web server (Apache with mod_rewrite is highly recommended) and is distributed under the MIT software license.

Pricing: Free
Source: http://www.madebyfrog.com/




XOOPS

XOOPS is an acronym for eXtensible Object Oriented Portal System. Though it started as a portal system, XOOPS has been striving steadily toward being a full content management system. It can serve as a web framework for use by small, medium and large sites.


A lite XOOPS install can be used as a personal weblog or journal. For this purpose, you can do a standard install and use its News module only. For a medium site, you can use modules like News, Forum, Download and Web Links to form a community and interact with your members and visitors. For a large site such as an enterprise one, you can develop your own modules such as eShop, and use XOOPS's uniform user management system to seamlessly integrate your modules with the whole system.

Pricing: Free
Source: http://www.xoops.org




Movable Type

The Movable Type Open Source Project was announced in conjunction with the launch of the Movable Type 4 Beta on June 5th, 2007. The MTOS Project is a community- and Six Apart-driven project that will produce an open source version of the Movable Type Publishing Platform, which will form the core of all other Movable Type products.

Pricing: Free
Source: http://www.movabletype.org




MODx

MODx is 100% buzzword compliant, and makes child's play of building content-managed sites with validating, accessible CSS layouts – hence "Ajax CMS". It empowers its users to build engaging "Web 2.0" sites today, with its pre-integrated MooTools, Scriptaculous and Prototype libraries. If you're a CSS designer or Ajax aficionado, this is the CMS for you; and if you like what you see today, you'll love what's coming.


Techies call MODx a Content Management Framework ("CMF"): equal parts custom web app builder and Content Management System ("CMS"). With a flexible API and a robust event override system, MODx makes building engaging web projects straightforward, and makes it possible to change core functionality without hacking the core code. Custom tweaks won't leave you pulling out your hair when it's time to upgrade.

Pricing: Free
Source: http://www.modxcms.com/




sNews

sNews is a completely free, standards compliant, PHP and MySQL driven content management system. Consisting of only one small core file, sNews is extremely lightweight, easy to install, and easy to use via a simple web interface.


With sNews you can create a website that consists of dynamic pages and have a blog at the same time. sNews has a front page option and categories for you to categorize your articles. Corporate CMS or a personal blog - your choice.


sNews uses search engine friendly URLs throughout, to make your website truly loved by Google and other search engines (as well as your visitors). This makes a sNews-driven site highly search engine optimized from the get-go and will greatly assist your site in obtaining a good SEO ranking.

Pricing: Free
Source: http://snewscms.com/




OpenCms

OpenCms from Alkacon Software is a professional, easy to use website content management system. OpenCms helps content managers worldwide to create and maintain beautiful websites fast and efficiently.


The fully browser based user interface features configurable editors for structured content with well defined fields. Alternatively, content can be created using an integrated WYSIWYG editor similar to well known office applications. A sophisticated template engine enforces a site-wide corporate layout and W3C standard compliance for all content.

Pricing: Free
Source: http://www.opencms.org/




ocPortal

ocPortal is a CMS (Content Management System) that allows you to create and manage your interactive and dynamic website from an easy-to-use administration interface. ocPortal is unique in its combination of a vast and diverse range of provided functionality, out-of-the-box usability, and the ability for unlimited customisation.


ocPortal has been completely free under an OSI-approved open source licence since version 4.0. There are no restrictions on usage, and no features are withheld from users.

Pricing: Free
Source: http://ocportal.com/




Habari

Habari is being written with a firm understanding of the current state of blogging. Most other blogging packages have been around long enough that their responses to things like comment spam and Digg site overloads are bolted on after the fact; whereas Habari is being written from the beginning to take these things — and more — into account.


Habari strongly favors open, standard, and documented protocols. Atom, being both open and documented, is the preferred syndication format, and the Atom Publishing Protocol is the preferred means of remote communication with your site. This is a core feature, and not a plugin.

Pricing: Free
Source: http://habariproject.org




dotCMS

dotCMS Core is the most feature complete open source web CMS available today. Available under the GPL, you can freely download the dotCMS system and get up, running and managing web sites and web content in a matter of minutes.


Unlike many other "open source" CMSes, dotCMS is in no way crippled. When you download the dotCMS system, you are getting all of the functions and features needed to begin creating customized, manageable and scalable web sites. This includes so-called "advanced" features such as LDAP/AD integration, clustering support, online templating, Web 2.0 calendaring and the structured content engine.

Pricing: Free
Source: http://www.dotcms.org/




Symphony

Symphony is a web publishing system made for web developers. It gives you all the power and flexibility you’ll need, while keeping out of your way.


Symphony lets you organise everything the way you like, from your publishing environment to your website’s URL structure. Built to be versatile and customisable, Symphony really is what you make of it.


Symphony’s templating engine is pure XSLT goodness. XSLT is a standard recommended by the W3C, so learning Symphony means that you’re learning skills that you can also use outside of the system. If you already know the XML and CSS standards, then chances are you should be able to quickly pick up XSLT.

Pricing: Free
Source: http://symphony-cms.com/


Source: http://www.blogperfume.com/best-free-weblog-software-other-than-wordpress/

Thursday, March 26, 2009

Tips on how to improve PageRank

LINKS!
------
PageRank is a calculation of the popularity of a web page, based on
the quality and number of links that point to it. Basically a link to
your page is seen as a vote of confidence, and a vote from a site that
has high PageRank is much more valuable. Nobody except Google's search
engineers knows exactly how it is calculated, but it is quite certain
that a link from a PR7 site is worth dozens of links from PR2 sites.

Google says:

"PageRank Technology: PageRank performs an objective measurement of
the importance of web pages by solving an equation of more than 500
million variables and 2 billion terms. Instead of counting direct
links, PageRank interprets a link from Page A to Page B as a vote for
Page B by Page A. PageRank then assesses a page's importance by the
number of votes it receives.

PageRank also considers the importance of each page that casts a vote,
as votes from some pages are considered to have greater value, thus
giving the linked page greater value. Important pages receive a higher
PageRank and appear at the top of the search results. Google's
technology uses the collective intelligence of the web to determine a
page's importance. There is no human involvement or manipulation of
results, which is why users have come to trust Google as a source of
objective information untainted by paid placement. "
http://www.google.com/corporate/tech.html
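
To make the "votes" idea concrete, here is a minimal Python sketch of
the kind of iterative calculation PageRank is built on. This is an
illustration only: the 0.85 damping factor comes from the original
PageRank paper, the three-page link graph is invented, and Google's
production formula and scale are far beyond this.

    # Minimal power-iteration sketch of the PageRank idea: pages repeatedly
    # pass their score ("votes") along their outgoing links.
    def pagerank(links, damping=0.85, iterations=50):
        """links: dict mapping each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        rank = {page: 1.0 / len(pages) for page in pages}
        for _ in range(iterations):
            new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
            for page, targets in links.items():
                if not targets:
                    continue  # pages with no outgoing links cast no votes
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            rank = new_rank
        return rank

    # A and B both link to C, so C accumulates the most votes.
    print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))

Notice how a single vote from a high-ranked page is worth more than
several votes from low-ranked ones, which is exactly why one PR7 link
can outweigh dozens of PR2 links.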

To see the PageRank of a page, you need to install the Google Toolbar:
http://toolbar.google.com/

So the best way to improve your PageRank is to get some links from
high PR sites. You are correct in assuming that link exchanges are not
the way to go - they are forbidden in Google Guidelines:

"Don't participate in link schemes designed to increase your site's
ranking or PageRank."
http://www.google.com/webmasters/guidelines.html

Link swapping is also very easy for Google to detect, seeing as their
algorithm revolves around link relationships.


Buy Links
---------
A growing trend is to approach a webmaster of a high PR site and offer
to pay for a link. Google has no way of determining whether money
changed hands, and the link will be as valuable as an organic one.

I sell one link on my PR6 site for $25 a month, but if the ad was less
relevant to my site's content I'd charge much more.


Directories
-----------
The biggest directories have quite high PageRank. Open Directory is
the best free directory to be listed in. A category that seems to be
suited to your site has a PageRank of 4:

http://dmoz.org/Shopping/Health/Disabilities/Assistive_Technology/Mobility_and_Transportation/Canes_and_Walking_Sticks/
(Click on "suggest URL" at the top of the page)

The corresponding category at Yahoo has PR5, but also costs $299 to be included:
http://dir.yahoo.com/Business_and_Economy/Shopping_and_Services/Disabilities/Assistive_Technology/Mobility_Assistance/Walking_Assistance/
(Click on "Suggest a Site" at the top)


Copy your Competitors
---------------------
Use discretion (i.e. avoid link exchanges), but otherwise it's quite
easy to find out where your competitors get their links from. Visit
their homepage, and in the Page Info dropdown box of the Google
Toolbar, select "Backward Links". You will then see a list of pages
that link to the page you are visiting. It won't list every page, but
does list all the ones worth getting a link from.

The two directory categories mentioned above are a good place to find competitors.

Asking for a link is quite simple:

1) Compliment them on their site
2) Suggest your site as one their visitors might find useful

Usually a site has an email address displayed; otherwise just try the
standard webmaster@blah.com


Example:

Looking at the Backward Links for Thomas Fetterman's site, we see they
are linked to from:
http://www.abledata.com/text2/assistiv.htm#Walking

A visit to their homepage finds a link to their Contact Us page:
http://www.abledata.com/text2/contact.htm

The site is regularly updated, links are free, the page has PR5. A
link from just that page could improve your site's PageRank.


Your Site
---------
Looks pretty good from a search engine point of view. In general I
wouldn't change a thing. Having a headline on each page that is the
same as the page title could improve things slightly.

More content is always a plus, but obviously difficult to come up with.


Hosting
-------
In my experience changing hosting companies would not normally affect
PageRank, but could do if they require that you link to them from your
site, or if they secretly use cloaking (rare, but a GA Researcher
recently uncovered a hosting service that did this). I wouldn't worry
about it.


As I indicated in my initial comment, it is links that you need. Get a
handful of PR5+ links and you should climb in PR and in the search
results.

Source: http://tungblog.atikomtrirat.com/2009/03/tips-on-how-to-improve-pagerank.html

Monday, March 09, 2009

Lesson (2): Crawler-Based Search Engines

In the previous lesson we discussed how crawler-based engines work. Typically, special crawler software visits your site and reads the source code of your pages. This process is called "crawling" or "spidering". Your page is then compressed and put into the search engine's repository which is called an "index". This stage is referred to as "indexing". Finally, when someone submits a query to the search engine, it pulls your page out of the index and gives it a rank among the other results it has found for this query. This is called "ranking".


Usually for indexing, crawler-based engines consider many more factors than those they can find on your pages. Thus, before putting your page into an index, a crawler will look at how many other pages in the index are linking to yours, the text used in links that point to you, the PageRank of the linking pages, whether the page is present in directories under related categories, etc. These "off-page" factors are a significant consideration when a page is evaluated by a crawler-based engine. While theoretically you can artificially increase your page's relevance for certain keywords by adjusting the corresponding areas of your HTML code, you have much less control over the other pages on the Internet that link to you. Thus, off-page relevance prevails in the eyes of a crawler.



In this lesson, we look at the main spider-based search engines, and learn how we can get each of them to index our site and rank it highly. Although this step does not closely deal with the optimization process itself, we provide information on how each search engine looks at your pages so that you can come back to this section for later reference.


Google (http://www.google.com)

Google is the number one search engine, ahead of such giants of the search market as Yahoo! and Live Search. Its search share is over 60%. Google indexes billions of Web pages so that users can search for the information they desire. It also creates services and tools, including Web applications, advertising networks and solutions for businesses, to successfully hold on to its leading position.


You can submit your site to Google here: http://www.google.com/addurl/ and your site will probably be indexed in around 1-2 months.



Alternatively, you can sign in to your Google account, go to Google Webmaster Tools and submit a sitemap of your site. For more info on creating sitemaps, please refer to Lesson "Creating a Search Engine Friendly Site Map".


Please keep in mind that Google may ignore your submission request for a significant period of time. Even if it happens to crawl your site, it may not actually index it if there are no links pointing to it. However, if Google finds your site by following the links from other pages that have already been indexed and are regularly re-spidered, chances are you will be included without any submission. These chances are much higher if Google finds your site by reading a directory listing, such as DMOZ (www.dmoz.org).


So submit your site and it may help, but links are the best way to get indexed.


In the past, Google typically performed monthly updates, known among experts as the "Google Dance". At the beginning of the month, a deep crawl of the web took place; after a couple of weeks, the PageRank for the retrieved pages was calculated; and at the end of the month the index database was finally updated. Nowadays, Google has switched to an incremental daily update model (sometimes referred to as everflux), so the concept of the Google Dance is quickly becoming historical.


The "Dance" took place from time to time but only when they need to make major changes to their algorithm. For example, their dance in November 2003 (known as Google Florida Update) was actually their first after about six months. In January 2004, Google started another dance (Austin Update) where pages that had disappeared during the "Florida" showed up again, and many pages that hadn't disappeared the first time were gone.



In February 2004 Google updated once more and things settled down. Most people who had lost pages saw them return and although the results were rather different than those shown before Florida, at least pages didn't seem to disappear for no reason.



Google claims to have 1 trillion (as in 1,000,000,000,000) unique URLs in its index. The engine constantly adds new pages to the index database - usually it takes around two days to list a new page after the Googlebot (Google's spider) has crawled it. The Google team works industriously towards algorithm perfection to keep their leading position amongst search engines.


These days, Google maintains a database which is continuously updated. Matt Cutts (head of Google's Webspam team) reported in his personal blog: "Google switched to an index that was incrementally updated every day (or faster). Instead of a monolithic monthly event, Google would refresh some of its index pretty much every day, which generated much smaller day-to-day changes that some people called everflux."


Google has lots of so-called "regional" branches, such as "Google Australia", "Google Canada," etc. These branches are modifications of their index database stored on servers located in the corresponding regions. They are meant to further adjust search results to searcher needs: when you're searching, Google detects your IP address (and thus approximate location) and feeds the results from the most appropriate index database.



Submission to the "Main Google" will list your site in all its regional branches - after Google indexes you, of course.



Google has a number of crawlers to do the spidering. They all have the name "GoogleBot" but they come from a number of different IP addresses. You can see if Google has visited your site by looking through your server logs: just find the IP address (something like 82.110.xxx.xx), and most probably you will see the user-agent defined as GoogleBot ("Googlebot/2.1+(+http://www.google.com/bot.html)"). Verify the IP address using a reverse DNS lookup to find the registered name of the machine; generally, all Google crawler host names end with 'googlebot.com'.
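
As a rough sketch, the reverse DNS check described above can be automated with Python's standard library. The IP address below is only an illustrative placeholder; a real check would feed in addresses from your own server logs:

    # Sketch of verifying a claimed Googlebot visit: reverse DNS lookup,
    # then a forward lookup to confirm the host resolves back to the same IP.
    import socket

    def is_googlebot(ip_address):
        try:
            host, _, _ = socket.gethostbyaddr(ip_address)  # reverse DNS lookup
        except socket.herror:
            return False
        if not (host == "googlebot.com" or host.endswith(".googlebot.com")):
            return False
        try:
            # Forward-confirm: the registered name must resolve to this IP.
            return ip_address in socket.gethostbyname_ex(host)[2]
        except socket.gaierror:
            return False

    print(is_googlebot("66.249.66.1"))  # illustrative address only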



Google is by far the most important search engine. Apart from their own site receiving 350 million searches per day, they also provide the search results for AOL Search, Netscape Search, Ask.com, Iwon, ICQ Search and MySpace Search. For this reason, most optimizers first focus on Google. Generally, this makes sense.



How to optimize for Google

Most important for Google are three factors: PageRank, link anchor text and semantics.


PageRank is an absolute value which is regularly calculated by Google for each page it has in its index. Later in this course we will give you a detailed description, but for now it's just important to know that the number of links you've got from other sites outside your domain matters greatly, as well as the link quality. The latter means that in order to give you some weight, the sites linking to yours must themselves have high PageRank, be content-rich and regularly updated.



MiniRank/Local Rank is a modification of the PageRank based on the link structure of your single site only. Since search engines rank pages, not sites, certain pages of your site will rank higher for given keywords than others. Local Rank has a significant influence on the general PageRank.


Anchor text is the text of the links that point to your pages. For instance, if someone links to you with the words "see this great website", this is a useless link. However, let's say you sell car tyres and a link from another site to yours says "car tyres from leading brands", such a link will boost your rank when someone searches for car tyres on Google.



Semantics is a new factor that appears to have made the biggest difference to the results. This term refers to the meaning of words and their relationships. Google bought a company called Applied Semantics back in 2003 and has been using the technology for their AdSense contextual advertising program. According to the principles of applied semantics, the crawler attempts to define which words mean the same thing and which ones are always used together.



For example, if there are a certain number of pages in Google's index saying that an executive desk is a piece of office furniture, Google associates the two phrases. Even so, a page about executive desks that uses only the keywords "office furniture" won't show up in a search for the keywords "executive desk". On the other hand, a page that mentions "executive desk" will rank better if it also mentions "office furniture".
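
The crawler's actual semantic machinery is proprietary, but the underlying intuition, that phrases appearing together across many pages become associated, can be shown with a toy co-occurrence count. The sample documents here are invented:

    # Toy phrase association by co-occurrence counting. Real semantic
    # analysis is far more sophisticated than this.
    from collections import Counter
    from itertools import combinations

    docs = [
        "an executive desk is a classic piece of office furniture",
        "our office furniture range includes the executive desk series",
        "this executive desk ships fully assembled",
    ]
    phrases = ["executive desk", "office furniture"]

    together = Counter()
    for doc in docs:
        present = sorted(p for p in phrases if p in doc)
        for pair in combinations(present, 2):
            together[pair] += 1

    # The pair co-occurs in 2 of 3 documents: evidence the phrases relate.
    print(together)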


Now, there are two other terms related to Google's way of ranking pages: Hilltop and Sandbox.


Hilltop is an algorithm that was created in 1999. Basically, it looks at the relationship between "Expert" and "Authority" pages. An "Expert" is a page that links to lots of other relevant documents. An "Authority" is a page that has links pointing to it from the "Expert" pages.


In theory, Google would find "Expert" pages and then the pages that they link to would rank well. Pages on sites like Yahoo, DMOZ, college sites and library sites would be considered experts.



Sandbox refers to Google's algorithm which detects how old your page is and how long ago it was updated. Usually pages with stale content tend to gradually slip down the result list, while newly crawled pages initially have higher positions than they would based on PageRank alone. However, some time after gaining these boosted positions, new websites disappear from the top places in search results, since Google wants to verify whether your website is here to stay and was not created with the sole purpose of benefiting from artificially high rankings over the short term. The period when a website is unable to make it to the top of search results is referred to as "being in the sandbox". This can last from six months to one year, after which the positions usually recover gradually. However, not all brand-new site owners observe the sandbox effect on their own sites, which has led to discussions about whether the sandbox filter really exists.



On-page factors considered by Google

Now that we've examined off-page factors that have primary importance for Google, let's take a look at on-page factors that should be given attention before submitting to Google.


Google does not consider the META keywords tag when counting relevancy. While your META description tag contents can be used by Google as the description of your site in the search results, the META description has no influence on the relevancy count.


Nowadays, META tags have practically no influence on a website's position in search results; they are useful only as an additional source of information about the Web page for surfers.


When targeting optimization for Google, be sure to use your keywords in the following:

  • Your domain name - important!
  • The first words of the TITLE tag and the HTML heading tags H1 to H6;
  • ALT text, as long as you also describe the image;
  • Quality content on your index page - try to make the length of your home page at least 300 words, but don't hide anything from visitors' eyes (VERY IMPORTANT!);
  • Link text for outgoing links;
  • Drop-down form boxes created with the help of the SELECT tag;
  • Finally, try to have some keywords in BOLD.

Additionally, try to center your pages around one central theme. Use synonyms of your important keyword phrases. Keep everything on the page on that ONE main topic, and provide good, solid content.
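
As a quick self-check, some of the on-page factors in the list above can be verified programmatically. The sketch below uses only Python's standard library HTML parser; the sample page and the single-keyword focus are simplifications of what a real audit would do:

    # Rough on-page checker for three of the factors listed above:
    # keyword in TITLE, in a heading tag, and in image ALT text.
    from html.parser import HTMLParser

    class OnPageChecker(HTMLParser):
        HEADINGS = {"h1", "h2", "h3", "h4", "h5", "h6"}

        def __init__(self, keyword):
            super().__init__()
            self.keyword = keyword.lower()
            self.in_title = False
            self.in_heading = False
            self.found = {"title": False, "heading": False, "alt": False}

        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True
            elif tag in self.HEADINGS:
                self.in_heading = True
            elif tag == "img":
                alt = dict(attrs).get("alt") or ""
                if self.keyword in alt.lower():
                    self.found["alt"] = True

        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False
            elif tag in self.HEADINGS:
                self.in_heading = False

        def handle_data(self, data):
            if self.in_title and self.keyword in data.lower():
                self.found["title"] = True
            if self.in_heading and self.keyword in data.lower():
                self.found["heading"] = True

    page = "<title>Car tyres from leading brands</title><h1>Car tyres</h1>"
    checker = OnPageChecker("car tyres")
    checker.feed(page)
    print(checker.found)  # {'title': True, 'heading': True, 'alt': False}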



Pages that are optimized for Google will score best when there are at least a few links to outside sites that are related to your topic, because this establishes your page's reputation as an authority. Google also measures how many websites outside your domain have links pointing to your site and factors in an "importance rating" for each of those referring sites. The more popular a site appears to a search engine, the higher up in the search listings it will be placed.


According to Craig Silverstein of Google:


"External links that you grant from a particular page on your website can become diluted. In other words, if you place 10,000 links to other Web pages from a particular page of your website, each link is less powerful than if you were to link to only five other Web pages. Or, the contribution value to another website of each individual link is weakened the more you grant."


Yahoo! (http://www.yahoo.com)

You can submit your URL to Yahoo! Search for free here: http://submit.search.yahoo.com/free/request (note: you need to register at their portal first) and it will be indexed in about 1-2 months.



Yahoo! is still the most popular site on the Web, according to its traffic rank reported by Alexa (www.alexa.com). Nevertheless, in terms of the number of searches performed Google carries the day.


Yahoo! provides results in a number of ways. First, it has one of the most complete directories on the Web. There's also Yahoo! Search, which lists results in a way similar to other crawler-based engines. Here, in this section on crawler-based engines, we deal with the second service.


Sponsored results are found at the top, side, and bottom of the search results pages fed by Yahoo!. Yahoo! now owns its Yahoo! Search Marketing pay engine and provides search results to AltaVista, AllTheWeb and Lycos.


The search results at Yahoo! changed in February 2004. For the previous couple of years, Google had been their search-results supplier; nowadays, Yahoo! uses its own database. Yahoo! bought engines that had earlier pioneered the search world, and Yahoo! Search now officially provides all the engines acquired through these acquisitions with its own results. Therefore, when you optimize your Web pages for Yahoo!, there's a good chance of appearing in the top results of other popular search engines, such as AllTheWeb and AltaVista.


To find out if Yahoo's spider has visited your site, look for the following information in your server logs. Their crawler is now called Yahoo! Slurp (formerly, it was just Slurp). For each request from the 'Yahoo! Slurp' user-agent, start with the IP address (e.g. 74.6.67.218), then check that it really is coming from Yahoo! Search using a reverse DNS lookup. The host names of all Yahoo! Search crawlers end with 'crawl.yahoo.net'.


To rank well in Yahoo, you need to do the same things that help your rankings in Google. Off-page factors (link popularity, anchor text, etc.) are very important. Some experts consider it easier to rank well on Yahoo because your own internal links are more important and there also appears to be no requirement for link relevancy. Whereas Google claims that the PageRank of the relevant linking sites is worth more than the PageRank of irrelevant sites, links don't need to be relevant to do well on Yahoo.



Like all other search engines, they'll list you for free if you get links to your site. Much like Google, their crawler is very active and updates listings on a daily basis. However, it can take a few weeks for Yahoo to list new pages after they have found and crawled through referring links. The pages that have already been included in the listing are updated much more often, usually every several days.


In March 2004, Yahoo launched its paid inclusion program called Site Match. You can find out details about Yahoo's paid inclusion program here: http://searchmarketing.yahoo.com/srch/index.php and the pricing here - http://searchmarketing.yahoo.com/srchsb/sse_pr.php?mkt=us


Site Match guarantees your site will appear in non-sponsored search results on Yahoo!, and other portals such as AltaVista and AllTheWeb.


Let's review some quick insights into the factors on your pages that will help you rank higher in Yahoo: use keywords in your domain name and use keywords in the TITLE tag. Your title must include your most important keyword phrase once toward the beginning of the tag. Don't split the important keyword phrase in your title!



META Keywords and META Description tags:



The Yahoo! family of search engines DOES NOT consider META tags when estimating relevancy. Use keywords in heading tags H1 through H6; use keywords in link anchor text and the ALT attributes of your images; the body's main content and page names (URLs) need to have keywords in them too. The recommended keyword weight in the BODY tag for Yahoo is 5%, with a maximum of 8%. A catalog page with lots of links, for instance a site map, will help a lot with your indexing and ranking by Yahoo!.
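
Keyword weight is usually computed as the share of the body's words taken up by the keyword phrase. Here is a small sketch of one common way to measure it; the sample body text is invented, and the exact formula varies between SEO tools:

    # Count occurrences of a phrase and express the words they occupy as a
    # percentage of all words in the body text.
    def keyword_weight(body_text, phrase):
        words = body_text.lower().split()
        target = phrase.lower().split()
        hits = sum(words[i:i + len(target)] == target
                   for i in range(len(words) - len(target) + 1))
        return 100.0 * hits * len(target) / len(words)

    # One two-word phrase in a 40-word body occupies 2/40 of the words,
    # i.e. 5%, the weight this course recommends for Yahoo.
    body = " ".join(["car tyres"] + ["filler"] * 38)
    print(keyword_weight(body, "car tyres"))  # 5.0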



You can submit just the main page to Yahoo!, and let its spider find, crawl, and index the rest of your pages. If it doesn't find an important page, however, make sure you submit it manually.


As with all of the other engines, solid and legitimate link popularity is considered an important ranking factor by Yahoo's spider.



Yahoo! frowns upon having satellite sites that revolve around the theme of a main site. For example, if you sell office furniture and set up a main company site and then plant several satellite sites for each kind of furniture, it may seem suspicious to Yahoo!. Therefore, make sure that each site is a stand-alone site and serves a unique purpose, and that it's valuable to both the search engines and your users.



As with any other search engine it is vitally important for Yahoo! that you create valuable content for your search visitors.


Windows Live Search (www.live.com)

You can submit your site to Live Search (formerly known as MSN Search) for free at http://search.msn.com/docs/submit.aspx, however they are sure to find it without your submission if you have links from sites already listed there.



MSN stands for Microsoft Network and was initially meant to be Microsoft's solution for Web search, among other goals. Nevertheless, it was powered by Inktomi's results and did not have its own crawler.


In February 2005, MSN switched away from another engine's result base and introduced its own Web crawler. The next significant step came on September 11, 2006, when the Live Search release replaced MSN Search. At the moment, Live Search is one of the most popular world-class search engines, with around 11% of all search traffic. It definitely makes sense to target placement at the top of Live Search's listings, as the amount of traffic you will receive as a result is considerable. However, with Live Search it's especially important to avoid spam methods, since they claim to use a sophisticated series of technologies to fight even potential spammers. PC Magazine has published an article that states:



"Spammers are increasingly trying to weasel their way into search engine results, and Microsoft hopes that filtering them out can be one area where its tool can outshine Google's."
For more information on this article, visit http://www.pcmag.co.uk/news/1155758.


The information for finding the Live Search spider in your server visit logs is as follows: The Spider's name is MSNBot, with the IP addresses something like 65.55.xx.xx and 65.64.xx.xx, host msnbot.msn.com, and user agent "msnbot/1.1 (+http://search.msn.com/msnbot.htm)".


At this stage, the general rules of optimizing for Live Search would be similar to the optimization rules for other search engines. Get your site listed in the directories, obtain solid and quality link popularity, balance your keyword theme. You may also consider purchasing an ad from Live Search to get listed in the sponsored results. Live Search equally treats both off-page and on-page factors when ranking pages.


FAST Search / AllTheWeb and AltaVista (http://www.alltheweb.com)

FAST (now called Fast Search and Transfer) is a Norwegian company that specializes in search and filter technology solutions. Some time ago it built the AllTheWeb search engine. In 2003, AltaVista and AllTheWeb were bought by the Overture engine, the latter purchased from FAST. Overture, in its turn, was purchased by Yahoo!. It is through this route that AllTheWeb joined the Yahoo! family of search engines. As a result, AllTheWeb's spider is no longer crawling the Web, and you can no longer submit to the engine. Its XML feed and paid inclusion programs have been changed over to Yahoo! Search Marketing programs. Submitting to Yahoo! Search (and getting indexed there) will get your pages into AltaVista, Yahoo! Web Results, AllTheWeb, Live Search, HotBot and other engines.


AltaVista (http://www.altavista.com) was once a big player in the search industry. Until about seven years ago they could claim to be the most used search engine. Since that time, this engine has lost its independence and much of its popularity. Nowadays, it's sending very little traffic to websites. As with AllTheWeb, it was purchased by Yahoo together with Overture.


Ask.com (http://www.ask.com)

Ask.com (formerly Ask Jeeves) is the last crawler-based search engine we are going to speak about. Ask.com supplies its search results to the Iwon search engine and is a member of the Google family of search engines.


It was designed as Ask Jeeves, where "Jeeves" is the name of the "gentleman's personal gentleman", or valet (illustrated by Marcos Sorensen), fetching answers to any question asked. On September 23, 2005 the company announced plans to phase out Jeeves, and on February 27, 2006 the character was disassociated from Ask.com (according to Wikipedia).


Ask.com has acquired DirectHit and Teoma which were big players in the search industry too. DirectHit was a search engine that provided results based on click popularity. Therefore, sites that received the most clicks for a particular keyword were listed at the top. Teoma was also unique due to its link popularity algorithm. Teoma claimed that it produces well ordered and relevant search results by initially eliminating all the irrelevant sites and then considering the popularity of only those that relate to the search subject in the first place.


Optimizing for Google will guarantee your appearance on the Ask.com results along with the other large engines of Google's family (AOL, Netscape and Iwon).

Sunday, February 15, 2009

How Much Bandwidth is Required for VoIP Phones?

A long-standing question for potential VoIP (Voice over Internet Protocol) consumers is “How much bandwidth does a VoIP phone require to make quality telephone calls?”


First of all, bandwidth is defined as the ability to transfer data (such as a VoIP telephone call) from one point to another in a fixed amount of time. The higher your bandwidth, the more data you can send over your broadband Internet connection.


There are two types of bandwidth at your location: upload bandwidth and download bandwidth. Upload bandwidth is the amount of data you can send to the Internet, and download bandwidth is the amount of data you can receive from the Internet. The more Internet bandwidth you have from your ISP (Internet Service Provider), the better.


In most cases, a normal VoIP telephone call will use 90 Kbps (kilobits per second). If your broadband Internet service provider doesn't offer much bandwidth, most VoIP providers give you the option to lower the voice quality by reducing the bandwidth used for VoIP calls to 60 Kbps or, to really conserve your bandwidth, 30 Kbps. Most people can't tell the difference between the three settings. We suggest you use the high sound quality setting (90 Kbps in most cases) if bandwidth is not an issue. High VoIP voice quality is generally the default setting, but if you are running into a situation where your bandwidth is limited, you can adjust your VoIP bandwidth to one of the lower settings. Some consumers with 128 Kbps upload connections can receive lower VoIP service quality due to a poor-quality ISP. By selecting a lower-quality VoIP bandwidth setting, this problem can be avoided.
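
Where does a figure like 90 Kbps come from? A VoIP stream is the codec's bitrate plus per-packet header overhead. The back-of-the-envelope sketch below uses textbook numbers for the common G.711 and G.729 codecs; actual usage depends on the provider's packetization settings and link-layer overhead:

    # Approximate one-way VoIP bandwidth: codec payload per packet plus the
    # 40 bytes of IP (20) + UDP (8) + RTP (12) headers sent with every packet.
    def voip_bandwidth_kbps(codec_kbps, packet_ms=20, header_bytes=40):
        packets_per_second = 1000 / packet_ms
        payload_bytes = codec_kbps * 1000 / 8 / packets_per_second
        return (payload_bytes + header_bytes) * 8 * packets_per_second / 1000

    print(voip_bandwidth_kbps(64))  # G.711: 80.0 Kbps, ~90 with link overhead
    print(voip_bandwidth_kbps(8))   # G.729: 24.0 Kbps for low-bandwidth links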


If you plan on using a VoIP service provider, should you get a DSL or a cable Internet access provider? In general, DSL upload bandwidth starts at 128k, whereas cable Internet upload bandwidth starts at around 600k. Cable Internet is a little more expensive, but it is also about 4-5 times faster than residential DSL and a bit friendlier to a VoIP telephone call. Having said that, both DSL and cable modem high-speed services provide sufficient broadband Internet bandwidth to support any of the top VoIP service providers. If you are experiencing low bandwidth from your broadband Internet service provider, we suggest you try Packet8 VoIP. Packet8 boasts an advanced compression technology in which each active voice line uses only approximately 23 Kbps of total data throughput, upstream and downstream.


The amount of bandwidth that a VoIP provider requires to make a quality telephone call is only one thing to consider when choosing a VoIP service provider. In fact, there are many things to consider when choosing a VoIP provider. An educated consumer generally results in a satisfied consumer.

Things to Consider When Selecting a VoIP Provider

The following are very important factors to consider when you are selecting a Voice over IP provider. Educate yourself and be informed before you choose.



Monthly costs: A VoIP provider can save you up to 75% on your telephone/long distance expenses. There are many Voice over IP providers out there, so it will benefit you to shop around. Unlimited calling packages can range from $19.95/month to as high as $54.95/month. Usually the lower-priced providers have more customers and are able to offer the service at a lower price due to a lower overhead per subscriber.



VoIP Product Features: Not all VoIP providers are created equal. Voice over IP offers great value to consumers because of the drastically reduced long distance costs as well as inexpensive local phone service with lots of enhanced features. Some providers offer more features than others. Features like Call Waiting, 3-Way Calling, etc. are usually included in the VoIP monthly cost, whereas the traditional phone companies will charge up to and above $5/month per feature. When shopping for a VoIP provider, be sure to compare providers by features as well as by monthly price.



Keeping Your Number: Some providers allow you to transfer (port) your current phone number to the VoIP service and some providers do not. It is not recommended to switch your home number to the VoIP service immediately; try out the service and see if you are satisfied before you request that your current number be switched. Keep in mind that if you have DSL service, you must retain a phone number with the DSL provider because the DSL service is provided over that telephone line. If you want to get rid of your current phone company altogether, then we suggest you use a cable Internet service provider.



911 Service: Most of the Voice over IP carriers offer E911 service, but not all. Be sure to check if the Voice over IP provider offers E911 because it is not a given. If the VoIP provider does not offer E911, then we suggest that you either have a cell phone or traditional landline to use in case of an emergency. (Note: It is also important to point out that if you take your VoIP phone when traveling, E911 has no way of knowing where you are when you call 911 if you are away from the registered address.)



International Calling: If you make a lot of international calls, you will want to do a lot of research on International Rates as they vary by provider. There are a few carriers that offer unlimited calling to certain countries.



Money Back Guarantee: Since VoIP is a relatively new product, almost all VoIP providers will offer a money-back guarantee. Be sure to check with each provider, as we have seen the money-back guarantees range from 14 days to 30 days. (Note: Be sure to keep the original packaging that your equipment came in, just in case you need to send it back.)



This is only a short list. In fact, there are many things to consider when choosing a VoIP provider. An educated consumer generally results in a satisfied consumer.

Friday, January 16, 2009

Lesson (1): Classification of Search Engines

The term "search engine" (SE) is often misused to describe both directories and pure search engines. In fact, they are not the same; the difference lies in how result listings are generated.


There are four major search engine types you should know about. They are:

  • crawler-based (traditional, common) search engines;
  • directories (mostly human-edited catalogs);
  • hybrid engines (META engines and those using other engines' results);
  • pay-per-performance and paid inclusion engines.

Crawler-based SEs, also referred to as spiders or Web crawlers, use special software to automatically and regularly visit websites to create and supplement their giant Web page repositories.


This software is referred to as a "bot", "robot", "spider", or "crawler". All these terms denote the same concept. These programs run on the search engines. They browse pages that already exist in their repositories, and find your site by following links from those pages. Alternatively, after you have submitted pages to a search engine, these pages are queued for scanning by a spider; it finds your page by looking through the lists of pages pending review in this queue.


After a spider has found a page to scan, it retrieves this page via HTTP (like any ordinary Web surfer who types a URL into a browser's address field and presses "enter"). Just like any human visitor, the crawling software leaves a record on your server about its visit. Therefore, it's possible to know from your server log when a search engine has dropped in on your online estate.
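
For the curious, the fetch-and-follow behaviour just described can be sketched in a few lines with Python's standard library. Real spiders add politeness delays, robots.txt handling and much more, and the start URL here is only a placeholder:

    # Bare-bones breadth-first crawler: fetch a page over HTTP, pull out the
    # href links, and queue them for fetching in turn.
    import re
    import urllib.request
    from urllib.parse import urljoin

    def crawl(start_url, max_pages=10):
        queue, seen = [start_url], set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                with urllib.request.urlopen(url, timeout=5) as response:
                    html = response.read().decode("utf-8", "replace")
            except Exception:
                continue  # unreachable pages are simply skipped
            for href in re.findall(r'href="([^"]+)"', html):
                queue.append(urljoin(url, href))  # follow links, as described
        return seen

    print(crawl("http://example.com/"))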


Your Web server returns the HTML source code of your page to the spider. The spider then reads it (this process is referred to as "crawling" or "spidering") - and this is where the difference begins between a human visitor and crawling software.


While a human visitor can appreciate the quality graphics and impressive Flash animation you've loaded onto your page, a spider won't. A human visitor does not normally read the META tags, but a spider can. Only seasoned users might be curious enough to read the code of the page when seeking additional information about it. A human visitor will first notice the largest and most attractive text on the page. A spider, on the other hand, will give more value to text that's closest to the beginning and end of the page, and to text wrapped in links.


Perhaps you've spent a fortune creating a killer website designed to immediately captivate your visitors and gain their admiration. You've even embedded lots of quality Flash animation and JavaScript tricks. Yet, a search engine spider is a robot which only sees that there are some images on the page and some code embedded into the "script" tag that it is instructed to skip. These design elements are additional obstacles on its way to your content. What's the result? The spider ranks your page low, no one finds it on the search engine, and no one is able to appreciate the design.


SEO (search engine optimization) is the solution for making your page more search-engine friendly. Optimization is mostly oriented towards crawler-based engines, which are the most popular on the Internet. We're not telling you to avoid design innovations; instead, we will teach you how to properly combine them with your optimization needs.


Let's return to the way a spider works. After it reads your pages, it will compress them in a way that is convenient to store in a giant repository of Web pages called a search engine index. The data are stored in the index in a way that makes it possible to quickly determine whether a page is relevant to a particular query and to pull it out for inclusion in the results shown in response to that query. The process of placing your page in the index is referred to as "indexing". After your page has been indexed, it will appear on search engine results pages for the words and phrases most common on the indexed page. Its position in the list, however, may vary.


Later, when someone searches the engine for particular terms, your page will be pulled out of the index and included in the search results. The search engine now applies a sophisticated technique to determine how relevant your page is to these terms. It considers many on-page and off-page factors, and the page is given a certain position, or rank, among the other results found for the surfer's query. This process is called "ranking".

Google (www.google.com) is a perfect example of a crawler-based SE.
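
A toy version of the index-then-rank pipeline makes the two stages easy to see. The pages and the count-based scoring below are invented simplifications; real engines store far richer data and weigh many more factors:

    # "Indexing": build an inverted index mapping each word to the pages
    # containing it. "Ranking": order matching pages by query-word frequency.
    from collections import defaultdict

    pages = {
        "page1": "car tyres from leading brands",
        "page2": "brands of office furniture",
        "page3": "cheap car tyres and car parts",
    }

    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.split():
            index[word].add(url)

    def search(query):
        terms = query.split()
        candidates = set().union(*(index.get(t, set()) for t in terms))
        scores = {url: sum(pages[url].split().count(t) for t in terms)
                  for url in candidates}
        return sorted(scores, key=scores.get, reverse=True)

    print(search("car tyres"))  # page3 says "car" twice, so it ranks first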


Human-edited directories are different. The pages that are stored in their repository are added solely through manual submission. The directories, for the most part, require manual submission and use certain mechanisms (particularly, CAPTCHA images) to prevent pages from being submitted automatically. After completing the submission procedure, your URL will be queued for review by an editor, who is, luckily, a human.


When directory editors visit and read your site, the only decision they make is to accept or reject the page. Most directories do not have their own ranking mechanism - they use various obvious factors to sort URLs, such as alphabetic sequence or Google PageRank™ (explained later in this course). It is very important to submit a relevant and precise description to the directory editor, as well as to take other parts of this manual submission seriously.


Spider-based engines often use directories as a source of new pages to crawl. As a result, it's self-evident in SEO that you should treat directory submission and directory listings as seriously and responsibly as possible.


While a crawler-based engine would visit your site regularly after it has first indexed it, and detect any change you make to your pages, it's not the same with directories. In a directory, result listings are influenced by humans. Either you enter a short description of your website, or the editors will. When searching, only these descriptions are scanned for matches, so website changes do not affect the result listing at all.


As directories are usually created by experienced editors, they generally produce better (at least better filtered) results. The best-known and most important directories are Yahoo (www.yahoo.com) and DMOZ (www.dmoz.org).


Hybrid engines. Some engines also have an integrated directory linking to them. They contain websites which have already been discussed or evaluated. When sending a search query to a hybrid engine, the sites already evaluated are usually not scanned for matches; the user has to explicitly select them. Whether a site is added to an engine's directory generally depends on a mixture of luck and content quality. Sometimes you may "apply" for a discussion of your website, but there's no guarantee that it will be done.


Yahoo (www.yahoo.com) and Google (www.google.com), although mentioned here as examples of a directory and crawler respectively, are in fact hybrid engines, as are nowadays most major search machines. As a rule, a hybrid search engine will favor one type of listing over another. For example, Yahoo is more likely to present human-powered listings, while Google prefers its crawled listings.


Meta Search Engines. Another approach to searching the vast Internet is the use of a multi-engine search, or meta-search engine that combines results from a number of search engines at the same time and lays them out in a formatted result page. A common or natural language request is translated to multiple search engines, each directed to find the information the searcher requested. The search engine's responses thus obtained are gathered into a single result list. This search type allows the user to cover a great deal of material in a very efficient way, retaining some tolerance for imprecise search questions or keywords.


Examples of multi-engines are MetaCrawler (http://www.metacrawler.com) and DogPile (http://www.dogpile.com). MetaCrawler refers your search to seven of the most popular search engines (including AltaVista and Lycos), then compiles and ranks the results for you.
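
The merging step itself is simple to illustrate. This sketch just interleaves ranked result lists and removes duplicates; the engine results shown are invented, and real meta-engines also re-score and normalize:

    # Interleave ranked lists from several engines, keeping each URL once,
    # so results agreed on by multiple engines surface early.
    def merge_results(result_lists):
        merged, seen = [], set()
        for rank in range(max(len(r) for r in result_lists)):
            for results in result_lists:
                if rank < len(results) and results[rank] not in seen:
                    seen.add(results[rank])
                    merged.append(results[rank])
        return merged

    engine_a = ["site1.com", "site2.com", "site3.com"]
    engine_b = ["site2.com", "site4.com"]
    print(merge_results([engine_a, engine_b]))
    # ['site1.com', 'site2.com', 'site4.com', 'site3.com']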


Pay-for-performance and paid inclusion engines. As is clear from the title, with these engines you have no way other than to pay a recurring or one-time fee to keep your site either listed, re-spidered, or top-ranked for keywords of your choice. There are very few search engines that solely focus on paid listings. However, most major search engines offer a paid listing option as a part of their indexing and ranking system.


Unlike paid inclusion where you just pay to be included in search results, in an advertising program listings are guaranteed to appear in response to particular search terms, and the higher your bid, the higher your position will be for these terms. Paid placement listings can be purchased from a portal or a search network. Search networks are often set up in an auction environment where keywords and phrases are associated with a cost-per-click (CPC) fee. Such a scheme is referred to as Pay-Per-Click (PPC). Yahoo and Google are the largest paid listing providers, and Windows Live Search (formerly MSN) also sells paid placement listings.
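
Here is a stripped-down picture of the bid-ordering principle, leaving aside the quality factors and second-price rules that real auctions apply; the advertisers and bids are invented:

    # Order sponsored listings for one keyword by cost-per-click bid,
    # highest bid first, as in a simple pay-per-click auction.
    bids = {"advertiser_a": 0.75, "advertiser_b": 1.20, "advertiser_c": 0.40}

    ordered = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    for position, (name, bid) in enumerate(ordered, start=1):
        print(position, name, "$%.2f" % bid)
    # 1 advertiser_b $1.20
    # 2 advertiser_a $0.75
    # 3 advertiser_c $0.40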


So here's what you should remember from this lesson:

  1. Search engines (SEs) are classified into crawlers, directories, META engines and paid-inclusion engines.
  2. Crawler-based SEs use software called robots, spiders, or crawlers to add new pages to their database, which is called an index. Directories use humans to manually fill their databases.
  3. After your site has been included in the index of a crawler-based search engine, you will appear in its results, and your position for a certain search query depends on how relevant the spider finds your page for this query.
  4. Your directory listings strongly influence your positions in crawling search engines.

Part 1: Understanding Search Engines

This step is an introduction to search engines and the relationships between them. This information is valuable in helping SEOs understand how site rankings in one search engine or directory can influence rankings in other engines.


Throughout this course (and commonly on the Internet as well), the term "SE" is used as an abbreviation for "search engine(s)".


Search engines are the most popular way for target customers to find you, which makes them the most vital avenue for bringing in business.


Currently, search engines around the world together receive around 400,000,000 searches per day. The searches are done with the help of keywords: as a rule, people type a short phrase consisting of two to five keywords to find what they are looking for. It may be information, products, or services.


(For more information on the share of world searches received by major search engines per day, see http://www.freeseosemtraining.com/blog/search-engine-market-share-july-2007.htm).


In response to this query, a search engine will pick from its huge database of Web pages those results it considers relevant to the Web surfer's terms, and display the list of these results to the surfer. The list may be very long and include several million results (remember that nowadays the number of pages on the Web reaches 2.1 trillion, i.e. 2,100,000,000,000), so the results are displayed in order of relevancy and broken into many pages (most commonly 10 results per page). Most Web surfers rarely go further than the third page of results, unless they are particularly interested in a wide range of materials (e.g. for scientific research). One reason for this is that they commonly find what they are looking for on those first pages without needing to dive any deeper.


That's why a position among the first 30 results (or "top-30 listing") is a coveted goal.

There used to be a great variety of search engines, but now after major reshuffles and partnerships there are just several giant search monopolies that are most popular among Web surfers and which need to be targeted by optimizers.


There are – and the search engines are aware of this – more popular searches and less popular searches. For instance, a search on the word "myrmecology" is conducted on the Web much more rarely than a search for "Web hosting". Search engines make money by offering special high positions (most often called "sponsored results") for popular terms, ensuring that a site will appear to Web surfers when they search for this term, and that it will have the best visibility. The more popular the term, the more you will have to pay for such a listing.


In this step, we provide insight into how search engines work, how they relate to each other and how it can be of use for you and your Web business.

Stage 1: Search Engine Marketing

Search Marketing has become a buzzword heard everywhere, many times a day. Here we provide an exact definition of what it refers to, and how it relates to both Web search and Web marketing.


Search Marketing is also known as Search Engine Marketing (SEM), and we will refer to it as SEM throughout this course. The definitions that follow are the basics; if you are an expert or advanced Search Marketer, you can skip these terms; otherwise we recommend that you read and understand them.


Search Marketing is the part of a business's marketing effort aimed at increasing traffic (the number of visitors) to its website from search engines. It also addresses conversion (the percentage of visitors who become buyers). The first is achieved by increasing search engine visibility, i.e. your site's position in search engine results for the keywords people type in the search box.


For instance, if someone wants to find and buy a digital camera, they will go to a search engine such as Google and type "digital camera" in the search box. Google will list, in this case, 138 million results (the real figure at the time this course was created). If you sell digital cameras or offer related services, your site may be listed among these 138,000,000 results. Everything then depends on how deep in the list you are: if you are on the first or second page of the search results, such visibility is likely to bring many visitors and customers from Google; if you are the 300th result, it's unlikely that anyone at all will come to you from Google.


Together with the power and size of your banner / ad network and your affiliations and partnerships, SE visibility forms part of a broader concept: Web visibility (aka online visibility).


Generally, there are two main methods of carrying out SEM: a) Search Engine Optimization (SEO) and b) pay-per-click and paid inclusion listing models. Both are briefly described in the following lessons.


Although paid inclusion and pay-per-click advertising seem like the fastest routes to search engine visibility, many website owners prefer the more time-consuming method of search engine optimization to obtain better marketing of their websites on search engines.


Organic rankings are results that you get for free. That is, you create Web copy and publish it; after a certain period of time a search engine robot finds it (either by itself or as a result of your submission), reads your content, and puts your site into its index. Now your site will be found by this search engine when people query for words contained within your pages. Positions in the result list obtained this way are called your "organic search engine rankings".
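
As a toy illustration of what "reading your content and putting it into an index" means, the Python sketch below fetches a page and records which words appear on it. The URL is just an example, and real search engine robots also handle robots.txt, link discovery, ranking signals and far more.

    # Toy crawler/indexer: maps each word on a page back to its URL.
    import re
    from urllib.request import urlopen

    def index_page(url, index):
        """Fetch a page and record which words appear on it."""
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        text = re.sub(r"<[^>]+>", " ", html)          # crudely strip HTML tags
        for word in set(re.findall(r"[a-z]{3,}", text.lower())):
            index.setdefault(word, set()).add(url)

    index = {}
    index_page("http://example.com/", index)
    print(index.get("domain"))   # which indexed pages contain the word "domain"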


Paid listings are different: pay a search engine and it guarantees the inclusion of your site in its index. Moreover, many search engines offer advanced pay-for-performance programs, such as showing your site or ad in the search results for keywords of your choice; these are the so-called "sponsored" results. Most commonly, you pay a specified rate for each visitor who clicks one of these ads and comes to your site from the search engine.


Mastering both methods and their proper combination can provide maximum search engine visibility. Because things keep changing, search engine marketers need to devote a good deal of time staying on top of the SEO industry and its trends.


The aim of SEM is not only to find the proper balance between organic and paid listings, but also to achieve maximum conversion of visitors into loyal customers. Nowadays SEM rests on the principle that it's not traffic itself that matters, but how targeted and convertible it is. How your traffic converts matters even more than how your site ranks: you can rank below a competitor, yet convert so much larger a share of your visitors into buyers that you outperform that competitor several times over.
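
A quick back-of-the-envelope comparison with invented figures shows how a worse-ranked site can outsell a better-ranked one:

    # Invented figures: rank #1 with poorly targeted traffic
    # versus rank #8 with well-targeted traffic.
    competitor_visitors, competitor_conversion = 1000, 0.01   # 1% of visitors buy
    your_visitors, your_conversion = 400, 0.05                # 5% of visitors buy

    print(competitor_visitors * competitor_conversion)   # 10 sales
    print(your_visitors * your_conversion)                # 20 sales: twice as many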


The following are the main goals of Search Engine Marketing:

  1. Improve Web visibility and get as much traffic as possible.
  2. Improve traffic quality: get high rankings for exactly those keywords that bring visitors with the best conversion rate.
  3. Decrease expenditures by switching off advertising for underperforming keywords.

Methods used by Search Marketing

The main methods used for achieving the goals of Search marketing are Search Engine Optimization (for organic listings), Bid Management (for paid listings) and Web Analytics (for both types of listings).


Search Engine Optimization (further referred to in this course as SEO) is about changing the HTML code of your pages and the structure of your site in such a way that when an SE robot reads the site, it can tell that the pages have valuable content related to your keywords, and rank them high. SEO also covers ways to increase your link popularity - the number of links from other high-ranked pages to your site. This is important because most search engines treat link popularity as a vital ranking factor.
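
As a small taste of the on-page side, here is a sketch (Python standard library only; the URL and keyword are placeholders, and a real audit checks far more) that tests whether a target keyword appears in a page's <title> tag, one classic on-page element:

    # Check whether a target keyword appears in a page's <title>.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class TitleParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.in_title, self.title = False, ""
        def handle_starttag(self, tag, attrs):
            if tag == "title":
                self.in_title = True
        def handle_endtag(self, tag):
            if tag == "title":
                self.in_title = False
        def handle_data(self, data):
            if self.in_title:
                self.title += data

    parser = TitleParser()
    parser.feed(urlopen("http://example.com/").read().decode("utf-8", "ignore"))
    print("example" in parser.title.lower())   # True if the keyword is in the title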


Bid Management is about controlling bids, i.e. the money you spend maintaining your visibility in the sponsored listings. Usually you try to detect the best-converting keywords and keyword groups in order to increase bids on them, and to decrease or remove bids on keywords that don't break even. Attention should also be paid to balancing your paid and organic listings: spend less on paid campaigns when you get enough traffic from natural results, and invest in paid advertising when an algorithm change or strong competitors force you out of the top organic positions.
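
A minimal sketch of this logic with invented per-keyword spend and revenue figures: raise bids where revenue comfortably exceeds cost, and pause keywords that don't break even.

    # Invented per-keyword stats: ad spend vs. revenue over the same period.
    keywords = {
        "digital camera":   {"spend": 300.0, "revenue": 900.0},
        "cheap camera":     {"spend": 250.0, "revenue": 240.0},
        "camera batteries": {"spend": 100.0, "revenue": 450.0},
    }

    for kw, s in keywords.items():
        roi = (s["revenue"] - s["spend"]) / s["spend"]
        if roi < 0:
            action = "pause bidding"        # doesn't break even
        elif roi > 1:
            action = "raise the bid"        # strong performer: buy more traffic
        else:
            action = "keep the bid as is"
        print(f"{kw}: ROI {roi:+.0%} -> {action}")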


Web Analytics (further referred to as WA) is about gathering, analyzing and using information about your visitors: who they are, how they behave on your site, how they found it, how efficient your referrers and advertising are, your conversion rates and, together with all that, your eCommerce data.
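
As one example of the kind of question WA answers, the sketch below aggregates an invented visit log by referrer to see which source actually sends converting visitors:

    # Invented visit log: (referrer, converted?) pairs a WA tool might collect.
    from collections import defaultdict

    visits = [
        ("google.com", True), ("google.com", False), ("google.com", False),
        ("live.com", False), ("live.com", True),
        ("banner-network.example", False), ("banner-network.example", False),
    ]

    stats = defaultdict(lambda: [0, 0])       # referrer -> [visits, conversions]
    for referrer, converted in visits:
        stats[referrer][0] += 1
        stats[referrer][1] += converted

    for referrer, (total, conv) in stats.items():
        print(f"{referrer}: {conv}/{total} visitors converted ({conv / total:.0%})")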


So here's what you should remember from this lesson:

  1. Search Marketing (also Search Engine Marketing, or SEM) is the aggregate of efforts aimed at increasing your search engine visibility.
  2. SEM deals with your organic and paid listings on the search engines.
  3. SEM includes and uses the techniques of Search Engine Optimization, Bid Management and Web Analytics.

Get Acquainted with Search Engine Optimization Tools

If you're promoting a website or business online, you can make your work easier with specialized software. Just as with website maintenance, where you normally use software to build Web pages and upload them to the hosting server, search engine optimization has its own industry tools that take many routine tasks off your shoulders.


There are several typical tasks every search marketer deals with on a daily basis. The most common are: researching keywords, optimizing keyword positioning across Web pages, submitting to search engines, checking rankings, searching for link partners, and launching and controlling PPC campaigns, among others.

What You Should Know before Getting Started with Online Marketing

The good news is that Internet marketing has grown rapidly and offers a broad array of opportunities for small and large businesses alike. The previous introduction familiarized us with the structure of Internet marketing and the particular steps to follow to develop an online business.


At this point, before delving more deeply into the subject, let's define some key terms. Online marketing deals with websites and Web pages, search engines, email, and the Internet as the base of the World Wide Web. All of these are used to advertise and sell goods and services.


Referencing the Wikipedia definitions of our basic terms, they are as follows:

The World Wide Web ("WWW" or simply the "Web") is a system of interlinked, hypertext documents that runs over the Internet.


The hypertext documents, or Web pages, reside on Web servers - special computers that receive requests for Web pages and can "serve" them to the requesting side.


Each Web server, or host, has a unique global address used to find it over the Internet. This address is called an "IP address". A typical (IPv4) address looks like four numbers separated by dots, for example 63.146.123.0.


Each server can host one or many websites. A website (or Web site) is a collection of Web pages, typically bound to a particular domain name or subdomain. A website is identified uniquely by its domain name, e.g. www.asiawebpro.com.


Domain names are translated into IP addresses by the global DNS - the domain name system. That is, when you type www.asiawebpro.com in your browser, the browser first sends a DNS request and receives the IP address of the server where www.asiawebpro.com is hosted. The browser then connects to that server directly and asks for the site's home page.
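
You can watch this translation happen yourself: a two-line Python sketch (using example.com, though any domain works) asks the system's DNS resolver for the IP address behind a name.

    # Ask the system's DNS resolver which IP address a domain name maps to.
    import socket
    print(socket.gethostbyname("example.com"))   # prints a dotted-quad IPv4 address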


Each website is composed of one or more Web pages. A Web page is a document, typically written in HTML, that is accessible via HTTP, the protocol that transfers information from the website's server for display in the user's Web browser.


So when a new business is born on the Internet, or when an established brick and mortar enterprise goes online, it starts by creating a website. This is done by purchasing a domain name from an organization accredited to sell domain names, e.g. godaddy.com or register.com. Also, some hosting space must be purchased, so that there's some server ready to store the website. Then, several Web pages are created and linked to each other to make up this website. Finally, the website is linked to the domain name so that everyone can type this domain name in a browser and view these pages.


With a Web browser, a user views Web pages that may contain text, images, and other multimedia and navigates between them using hyperlinks. A client program called a "user agent" retrieves information resources, such as Web pages and other computer files, from Web servers using their URLs.


Most commonly, the user agent is a Web browser: Internet Explorer, Mozilla Firefox, Opera, Netscape, or whatever program you are using to view this lesson. It retrieves content from remote Web servers and displays it on your computer. You can then follow hyperlinks in each Web page to other World Wide Web resources, whose location (including their domain name) is embedded in the hyperlinks. The act of following hyperlinks from one website to another is referred to as "browsing" or sometimes as "surfing" the Web.
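
In fact, any program can act as a user agent. Here is a minimal Python sketch that fetches a page over HTTP, sending a made-up User-Agent string the way a browser does:

    # A tiny "user agent": request a page over HTTP and show part of the HTML.
    from urllib.request import Request, urlopen

    req = Request("http://example.com/",
                  headers={"User-Agent": "MiniBrowser/0.1 (illustrative)"})
    html = urlopen(req).read().decode("utf-8", errors="ignore")
    print(html[:120])   # the start of the page the server "served" back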


To find a Web page, you can always type its address in the address field of your browser. But what if you don't know the exact address, or want to find all Web pages from different websites on a particular topic?


This is when you use a search engine or directory. There are three top search engines: Google, Yahoo! and MSN (Windows Live Search). These are the engines Web surfers prefer most, and every site owner strives to get included in their databases. If people can find your website through search engines, those engines become an invaluable source of traffic for you, which translates into income if you sell goods or services.


In fact, search engines are numerous and differ in how they work. We have a specific section, "Understanding Search Engines", which includes seven lessons to familiarize you with their core principles.


There is one final point we'll look at: the use of email in your online marketing activity. The topic is email marketing - using this form of communication to send offers of goods and services to clients.


Electronic mail (abbreviated "email") is a store-and-forward method of composing, sending, storing, and receiving messages over electronic communication systems.


Email predates the Internet; existing email systems were a crucial tool in creating it. Email was quickly extended and became an additional, highly useful tool for online marketers.


Email Marketing and other forms of Internet Marketing are covered in this course to help you select and master a niche for your future online business.

Introduction into Internet Marketing

Starting in the early 1990s, Internet marketing developed remarkably, from simple text-based websites offering product information into highly evolved, complete online businesses that promote and sell their services on the Internet.


Nowadays, the Internet marketing industry has become a complex, many-branched discipline involving a great deal of theoretical knowledge combined with applied techniques. As a field, it ranges from browser-side and server-side programming and coding on one end to marketing and economics on the other.


Internet marketing means the use of the Internet to advertise and sell goods and services. It includes Banner and Text Advertising, Email Marketing, Interactive Advertising, Affiliate Marketing and Search Engine Marketing (including Search Engine Optimization and Pay-Per-Click Advertising).


The first stage of our Internet Marketing course starts with Search Engine Marketing (SEM) as a specific area of an online marketer's business. Its main purpose is to increase targeted traffic from search engines via organic search engine rankings, paid listings and advertising. Here you'll be shown the main principles of Search Engine Optimization (SEO), link building, and paid advertising campaigns.


Every successful search engine optimizer should be aware of what the top search engines demand and keep it in mind while creating a website and improving on-page and off-page factors for its Web pages. There are numerous important factors that influence a Web page's search engine ranking. The SEO division of the course provides thorough, sequential lessons covering each step of your optimization work. Search Engine Advertising is the last topic of the SEM stage.


There are certain methods beyond SEM that can help improve your site's online visibility. These include, for instance, creating and growing a banner / ad network and / or paid link partnerships, as well as email marketing and building affiliate relationships with other websites.


Email marketing is an independent branch which has to be dealt with separately and does not have much in common with SEM. It is the subject of our next stage, where we will provide insight into the main steps and guidelines of a direct mail campaign.


Banner networks relate to SEM insofar as they affect your link popularity (which is a component of SEM).


In the following stage you'll study the Affiliate Marketing division of Internet Marketing. It is a popular method for promoting web businesses: with few marketing dollars, marketers can establish a presence and earn a profit by recruiting affiliates. Such partner networks can grow along with your business projects and add their profit to your marketing budget.


The most vital stage of the whole course is Web Analytics. Its role can hardly be overestimated, as Web Analytics is essential for continually improving web business performance, advertising campaigns, organic search engine results, ranking positions and more. Generally, Web Analytics deals with the traffic already generated at the previous stages; its primary goal is to improve traffic quality and enhance conversion.


Although it is possible (and advisable) to understand every theoretical aspect of Internet marketing, in practice you may do much better by specializing in a specific area or technique and simply starting your Internet marketing business. Our last stage provides a clear, practical scheme for estimating your potential, finding a niche, managing projects and promoting your services as an online marketer, SEO consultant, etc.