Sunday 25 August 2013

12 Tips to Improve Your Google Ranking

1. Page Titles Define Your Website

Everything a potential customer needs to know about your business should be captured in the title, and every page title on your website should be meaningful. You can easily spot a poorly optimized site: the front page title says "Welcome" or "Home", or there is no title at all.
Your title should define the page's content and describe your business, because this is what Google displays in its search results. Keep it short (around 50 characters including spaces) and give careful consideration to the keywords that define your business, what it does and where it does it.
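As a minimal sketch (the business name, service and city below are invented placeholders), a descriptive title might look like this:

<head>
  <!-- hypothetical example: name, service and location in under ~50 characters -->
  <title>Smith Plumbing | Emergency Plumber, Brisbane</title>
</head>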


2. One URL to Rule Them All

If your website can be accessed through multiple URLs, you'll likely be penalized by Google for "doubling up" on content. Choose your main URL and have all the others redirect to that address (your web host should be able to help you with this). That way, search engines won't have to figure out which URL to present when displaying your site in search results.
This also applies to your website address with the www prefix, which should also be set up as a redirect (or vice versa, depending on your preference).
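One common way to signal your preferred address, alongside server-side 301 redirects, is the canonical link element (the URL below is a placeholder):

<head>
  <!-- hypothetical example: every duplicate URL points at the one preferred address -->
  <link rel="canonical" href="http://www.example.com/">
</head>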

3. Content is King

Google analyzes the text content of your site to determine how relevant your keywords are. It uses many heuristics, including giving more weight to text in the first half of the page, text inside heading tags and text linked to other pages within the site. Take time to determine the keywords and phrases most important to your website and make good use of them in your meta tags, headings and image tags, as the sketch below shows.
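(A hypothetical page fragment; the business and wording are invented.) Key phrases belong in headings and early body text:

<body>
  <!-- hypothetical example: headings carry the most important keywords -->
  <h1>Handmade Leather Wallets</h1>
  <p>We craft leather wallets and belts in Melbourne and ship Australia-wide.</p>
</body>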

4. Get to the Point!

The front page of your website is the most important page. It carries everything a search engine needs to determine your site's relevant keywords. If your front page simply says "Welcome to our Website. Click to Enter", you've not only wasted the user's time with a pointless click-through page, you've also made Google work harder to reach your real front page and decide which destination URL is relevant to users.

5. Don't Be Evil

Google's search algorithms are fairly well-guarded secrets, and the literature available about boosting search rankings is largely speculative. What we do know is that Google's famous motto "Don't Be Evil" also applies to webmasters, and trying to trick Google will result in severe penalties. While this guide is largely a list of Dos, it's also worth mentioning some Don'ts:
  • Don't load up pages with keyword lists that are the same colour as your background
  • Don't deliver different content to Googlebot than you do to regular users
  • Don't load irrelevant keywords in your meta tags
  • Don't load up multiple title tags (called title stacking)
  • Don't keyword-spam your file names
  • Don't list your website URL on link farms
  • Don't pay to get your site listed on high-PageRank sites
Doing any of the above could get your site banned from Google. 

6. Linked Text Counts

When you link to pages within your website, the linked text is counted as keywords describing that page. Instead of simply using "click here" as link text, use a more descriptive phrase, as the example below shows.
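(The URL and wording here are placeholders.)

<!-- weak: tells Google nothing about the target page -->
<a href="/wallets.html">Click here</a>

<!-- better: the link text doubles as keywords for the target page -->
<a href="/wallets.html">Browse our handmade leather wallets</a>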

7. Control How Google Describes Your Site

Meta tags are HTML tags in your web pages that are hidden from the viewer but used by Google to fetch a useful description to display for your site in its search results. Without a meta description, Google won't give your website its best representation, as it will try to assemble its own description from content on the site.
It's recommended to give each page on your website its own meta description instead of reusing the same one across all pages, as this provides a relevant description for deeper search results.
Note: Depending on the user's search query, for relevance Google may display snippets of content from your page instead of the page's meta description.
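A minimal sketch of a page-specific description (the wording is an invented placeholder; around 150 characters is a sensible length):

<head>
  <!-- hypothetical example: write a unique description for every page -->
  <meta name="description" content="Handmade leather wallets crafted in Melbourne. Free shipping Australia-wide on orders over $50.">
</head>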
 

8. Flash is Not So Flash

Flash has its uses, but it is also prone to misuse. Many creative types build their entire website in Flash, with no regard for the fact that Google can't extract any text content from Flash. When you present text content in Flash you are reducing a search engine's ability to index keywords for your site, so make sure to only use Flash where absolutely necessary. 

9. A Picture Shouldn't Say a Thousand Words

As with Flash, Google can't extract text content from images. Believe it or not there are still some websites around that use images for their entire text content! Images should never be used in place of text, especially with the flexibility of Style Sheets to present headings in any format required.
When using images for logos, graphs, photos and so on, always include a descriptive alt attribute, because Google indexes these words; not to mention that people who are unable to display (or see) images will appreciate your accommodating them.
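For example (the file name and description are placeholders):

<!-- hypothetical example: the alt text is indexed by Google and read by screen readers -->
<img src="bifold-wallet.jpg" alt="Brown bifold leather wallet, hand-stitched">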

10. Quality Links Count

Google's original algorithm for determining a site's relevance was (and still is) to analyze the volume of incoming links to a site. A site with a lot of incoming links is considered an authority of sorts. If a "high authority" website links to a "low authority" website, the latter's credibility gets a massive lift, which increases its rank.
What spawned from this concept was masses of link farm websites, which Google eventually found a way to discredit and, in fact, penalise the websites listed on them. There is great value in getting links to your website through link exchanges, but they must be quality links. Under this model it's easy to see how publishing expert content (just like this guide) on your website will attract incoming links and boost your rank.
Note: If you allow users to post comments on your website, always ensure that any links posted include rel="nofollow" in the tag so Google doesn't treat those links as recommended by you.
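A user-posted link would then look like this (the URL is a placeholder):

<!-- hypothetical example: nofollow tells Google you don't vouch for this link -->
<a href="http://example.com/" rel="nofollow">a commenter's link</a>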
 

11. A Clear Site is an Organised Site

It's not just users who appreciate clear site navigation: Google uses your folder structure to determine the most (and least) important content on your website. For example, a link on your site with five sub-folders before the file name will be treated as less important content than a file with only one.
Google appreciates short and descriptive folder and file names (as will your users) and uses them (along with your Domain) to determine relevant content in search results. 

12. If You Can Measure It, You Can Manage It

Google provides two excellent tools to track your website's performance: Google Analytics, a web-based website statistics and traffic tool; and Webmaster Tools, which focuses on your site's performance as it relates to Google, for example, how your keywords rate in search results, how many clicks your site has received and so on. It will also help you identify any problems encountered during indexing.

Saturday 13 April 2013

.NET Framework Helps To Find Data More Easily

With a few lines of code, you can extract data from text files, including log files, using regular-expression capture groups. If you have used regular expressions to search for matching text, you will find extracting text with the .NET Framework very useful. If you have not worked with regular expressions before, or (like me) you need a reference to remember all the symbols, check out the Microsoft Developer Network's reference for help.


Finding Matching Lines

Imagine that you need to parse a log file (we’ll use C:\Windows\WgaNotify.log as an example, because it’s present on most computers) and list every file that was successfully copied. The WgaNotify.log file resembles the following:

[WgaNotify.log]
0.109: ========================================================
0.109: 2006/04/27 06:54:09.218 (local)
0.109: Failed To Enable SE_SHUTDOWN_PRIVILEGE
1.359: Starting AnalyzeComponents
1.359: AnalyzePhaseZero used 0 ticks
1.359: No c:\windows\INF\updtblk.inf file.
23.328: Copied file:  C:\WINDOWS\system32\LegitCheckControl.dll
23.578: Copied file (delayed):  C:\WINDOWS\system32\SETE.tmp
25.156:  Return Code = 0
25.156: Starting process:  C:\WINDOWS\system32\wgatray.exe /b

As you can see, two of the lines (the "Copied file" entries) contain useful information, and the rest can be ignored. You could use the following console application, which requires the System.IO and System.Text.RegularExpressions namespaces, to display just the lines that contain the phrase "Copied file":

' Visual Basic
Dim inFile As StreamReader = File.OpenText("C:\Windows\wganotify.log")
' Read each line of the log file
Dim inLine As String = inFile.ReadLine()
While inLine IsNot Nothing
    Dim r As New Regex("Copied file")

    ' Display the line only if it matches the regular expression
    If r.IsMatch(inLine) Then
        Console.WriteLine(inLine)
    End If
    inLine = inFile.ReadLine()
End While
inFile.Close()
// C#
StreamReader inFile = File.OpenText(@"C:\Windows\wganotify.log");
string inLine;
// Read each line of the log file
while ((inLine = inFile.ReadLine()) != null)
{
    Regex r = new Regex(@"Copied file");
    // Display the line only if it matches the regular expression
    if (r.IsMatch(inLine))
        Console.WriteLine(inLine);
}
inFile.Close();

Running this console application would match the lines that contain information about the files copied and display the following:

23.328: Copied file:  C:\WINDOWS\system32\LegitCheckControl.dll
23.578: Copied file (delayed):  C:\WINDOWS\system32\SETE.tmp

Capturing Specific Data

To extract portions of matching lines, specify capture groups by surrounding a portion of your regular expression with parentheses. For example, the regular expression "Copied file:\s*(.*$)" would place everything after the phrase "Copied file:" and any following white space (the "\s" symbol) into a group. Remember, ".*" matches anything, and "$" matches the end of the line.
To match a pattern and capture a portion of the match, follow these steps:
  1. Create a regular expression, and enclose in parentheses the pattern to be matched. This creates a group.
  2. Create an instance of the System.Text.RegularExpressions.Match class by calling the Regex.Match method.
  3. Retrieve the matched data by accessing the elements of the Match.Groups collection. Element 0 holds the entire match; the first capture group is stored in element 1, the second in element 2, and so on.
The following example expands on the previous code sample to extract and display the filenames from the WgaNotify.log file:


' Visual Basic
Dim inFile As StreamReader = File.OpenText("C:\Windows\wganotify.log")
' Read each line of the log file
Dim inLine As String = inFile.ReadLine()
While inLine IsNot Nothing
    ' Create a regular expression
    Dim r As New Regex("Copied file.*:\s+(.*$)")

    ' Display the group only if it matches the regular expression
    If r.IsMatch(inLine) Then
        Dim m As Match = r.Match(inLine)
        Console.WriteLine(m.Groups(1))
    End If
    inLine = inFile.ReadLine()
End While
inFile.Close()
// C#
StreamReader inFile = File.OpenText(@"C:\Windows\wganotify.log");
string inLine;
// Read each line of the log file
while ((inLine = inFile.ReadLine()) != null)
{
    // Create a regular expression
    Regex r = new Regex(@"Copied file.*:\s+(.*$)");
    // Display the group only if it matches the regular expression
    if (r.IsMatch(inLine))
    {
        Match m = r.Match(inLine);
        Console.WriteLine(m.Groups[1]);
    }
}
inFile.Close();

This code does a bit better, displaying just the filenames of the copied files:

C:\WINDOWS\system32\LegitCheckControl.dll
C:\WINDOWS\system32\SETE.tmp

Capturing Multiple Groups

You can also separate the folder and filename by matching multiple groups in a single line. The following slightly updated sample creates separate capture groups for the folder name and the filename, and then displays both values. Notice that the regular expression now contains two groups (indicated by two sets of parentheses), and the call to Console.WriteLine now references elements 1 and 2 of the Match.Groups collection.


' Visual Basic
Dim inFile As StreamReader = File.OpenText("C:\Windows\wganotify.log")
' Read each line of the log file
Dim inLine As String = inFile.ReadLine()
While inLine IsNot Nothing
    ' Create a regular expression
    Dim r As New Regex("Copied file.*:\s+(.*\\)(.*$)")

    ' Display the line only if it matches the regular expression
    If r.IsMatch(inLine) Then
        Dim m As Match = r.Match(inLine)
        Console.WriteLine("Folder: " & m.Groups(1).Value & ", File: " & m.Groups(2).Value)
    End If
    inLine = inFile.ReadLine()
End While
inFile.Close()
// C#
StreamReader inFile = File.OpenText(@"C:\Windows\wganotify.log");
string inLine;
// Read each line of the log file
while ((inLine = inFile.ReadLine()) != null)
{
    // Create a regular expression
    Regex r = new Regex(@"Copied file.*:\s+(.*\\)(.*$)");
    // Display the line only if it matches the regular expression
    if (r.IsMatch(inLine))
    {
        Match m = r.Match(inLine);
        Console.WriteLine("Folder: " + m.Groups[1] + ", File: " + m.Groups[2]);
    }
}
inFile.Close();

The end result is that the console application captures the folder and filename separately, and outputs the following formatted data:

Folder: C:\WINDOWS\system32\, File: LegitCheckControl.dll
Folder: C:\WINDOWS\system32\, File: SETE.tmp

Using Named Capture Groups

You can make your regular expressions easier to read by naming the capture groups. To name a group, add “?<name>” after the open parenthesis. You can then access the named groups using Match.Groups[“name”]. The following example demonstrates using named groups with the Match.Result method, which allows you to format the results of a regular expression match. It produces exactly the same output as the previous code sample, but the code is easier to read.

' Visual Basic
Dim inFile As StreamReader = File.OpenText("C:\Windows\wganotify.log")
' Read each line of the log file
Dim inLine As String = inFile.ReadLine()
While inLine IsNot Nothing
    ' Create a regular expression with named capture groups
    Dim r As New Regex("Copied file.*:\s+(?<folder>.*\\)(?<file>.*$)")

    ' Display the line only if it matches the regular expression
    If r.IsMatch(inLine) Then
        Dim m As Match = r.Match(inLine)
        Console.WriteLine(m.Result("Folder: ${folder}, File: ${file}"))
    End If
    inLine = inFile.ReadLine()
End While
inFile.Close()
// C#
StreamReader inFile = File.OpenText(@"C:\Windows\wganotify.log");
string inLine;
// Read each line of the log file
while ((inLine = inFile.ReadLine()) != null)
{
    // Create a regular expression
    Regex r = new Regex(@"Copied file.*:\s+(?<folder>.*\\)(?<file>.*$)");
    // Display the line only if it matches the regular expression
    if (r.IsMatch(inLine))
    {
        Match m = r.Match(inLine);
        Console.WriteLine(m.Result("Folder: ${folder}, File: ${file}"));
    }
}
inFile.Close();

The .NET Framework supports using capture groups with regular expressions to extract specific data from log files. Using capture groups, you can parse complex text files and isolate just the information you need. First, create a Regex object (part of the System.Text.RegularExpressions namespace) using a regular expression that includes one or more capture groups in parentheses. Then, call the Regex.Match method to compare the regular expression to the input string. Access your capture groups through the Match.Groups collection, or format and output them by calling Match.Result.

Friday 1 March 2013

HTML5 Benefits for Users and Developers

You may not realise it, but you are probably already using a browser that is compatible with HTML 5. It is estimated that about half of all Internet users are already using browsers that are ready for HTML 5. Firefox (version 3.5 and higher), Chrome, Safari and Opera are the most popular of these HTML 5-compatible browsers, though support is only partial at this stage. Microsoft's Internet Explorer 8 also supports many HTML 5 features.


HTML 5 was developed to give developers more flexibility, let them do more in-house, and enable more exciting, interactive websites and more powerful, efficient applications. It brings HTML up to date with what developers have been struggling to do in HTML 4 via plugins and the like, including managing data, drawing, and playing video and audio, so that websites can deliver what users want better and faster. It should eventually make it easier for developers to build cross-browser applications for the Web and for portable devices.
 
HTML 5 introduces many new tags and enhancements covering a wide range of features, including form controls, APIs, multimedia, document structure, database support and faster processing. Note that the specification is still being worked on, and it could be some time before it is completed and universally adopted as a common standard by all browsers. Users will, of course, have to update their browsers to benefit from the enhanced features.
 
Benefits of HTML 5

A major benefit is better direct HTML support for drawing, animation, video and audio.

Drawing, Video and Sound - Web developers have increasingly been trying to create applications that display fluid animations, stream video, play music and integrate with social network sites such as Twitter and Facebook. In most cases they could only deliver these things by learning and applying add-on tools such as Flex, Flash or Silverlight, or by building complex JavaScript tools, which increased the complexity and development time of their web applications. HTML 5 changes this with DOM and HTML support (without plugins or third-party programs) for video and audio embedding, high-quality drawings, charts and animation, and many other types of rich content demanded by users.
  • The new canvas element gives developers a very powerful yet very simple way to draw diagrams, graphics and dynamic animations on a web page using pure JavaScript. A good example is Mozilla's Bespin tool, which is written in JavaScript and HTML 5. Developers can use standalone HTML to create sites with interactive pictures, animation, charts and graphs, game components, and whatever else they want by directly developing the program code and user interaction (see the sketch after this list).
  • The new HTML 5 video element makes it just as easy to embed video on a web page as it has been to embed images using HTML 4 and the older HTML standards. Again, no plugins or third-party software are required. It includes timed playback and other great new features.
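A minimal sketch of both elements (the element ID and file name are placeholders):

<!-- hypothetical example: a shape drawn with plain JavaScript, plus native video -->
<canvas id="chart" width="200" height="100"></canvas>
<script>
  var ctx = document.getElementById("chart").getContext("2d");
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 10, 120, 60); // draw a simple bar, no plugin needed
</script>

<video src="intro.mp4" width="320" height="240" controls></video>
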
Geolocation
The new HTML 5 geolocation APIs make location, whether generated via GPS or other methods, directly available to any HTML 5-compatible browser-based application. A good example is Google Latitude for the iPhone: a pure web app rather than a platform-dependent iPhone application.
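A minimal sketch of the API (the callback here just logs the coordinates):

<script>
  // ask the browser for the user's position; the user must grant permission
  navigator.geolocation.getCurrentPosition(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
  });
</script>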

Client-side database
HTML 5 provides a new SQL-based database API that can be used for storing data locally, that is, on the client side. You get fully defined, structured database storage, which allows a developer to save structured data client-side using a real SQL database. It is not a permanent database, but it lets you store structured data temporarily. The data can be accessed to support the web application, even when the client is disconnected for a short period of time. Such a database could be used to store e-mails, or shopping cart items for an online shopping site.
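A minimal sketch of the Web SQL API as it was drafted (note the W3C has since stopped work on this particular specification; the database name and schema below are invented):

<script>
  // open (or create) a small client-side database
  var db = openDatabase("shop", "1.0", "Shopping cart", 2 * 1024 * 1024);
  db.transaction(function (tx) {
    tx.executeSql("CREATE TABLE IF NOT EXISTS cart (id INTEGER, item TEXT)");
    tx.executeSql("INSERT INTO cart (id, item) VALUES (?, ?)", [1, "Wallet"]);
  });
</script>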

Offline Application Cache
HTML 5 defines an offline application HTTP cache that can be used to make sure applications are available even when the user is disconnected from the network. All browsers have a cache, but they have been very unreliable for delivering whole pages and applications: often the browser would not cache a page properly, so you could not view it once you disconnected from the Internet. HTML 5 provides a smart solution by allowing a developer to specify the files the browser should cache while online. Then, even if you reload the page from the cache while offline, the complete page still loads correctly.
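A minimal sketch (the file names are placeholders). The page opts in by pointing at a manifest:

<!-- hypothetical example: index.html references the cache manifest -->
<html manifest="offline.appcache">

The manifest itself is a plain text file listing what the browser should keep:

CACHE MANIFEST
# v1 - files available offline
index.html
style.css
app.js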

Thread-like Operation
Web Workers provide a way to spawn background threads to speed up browser application processing. The API allows developers to create background workers that run scripts in parallel with the main page script, giving faster, thread-like processing with coordination handled via message-passing.
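A minimal sketch (worker.js is a placeholder file name):

<script>
  // main page: hand work to a background script and receive the result
  var worker = new Worker("worker.js");
  worker.onmessage = function (e) { console.log("Result: " + e.data); };
  worker.postMessage(21);
</script>

The worker script runs off the main thread:

// worker.js
onmessage = function (e) {
  postMessage(e.data * 2); // send the computed value back to the page
};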

Smarter forms
HTML 5 offers enhanced forms, with improvements to text inputs, search boxes and other fields, and provides better controls for validating data, managing focus and interacting with other elements on the page, among various other improvements.
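A minimal sketch of the new input types with built-in validation (the field names are placeholders):

<form>
  <!-- the browser validates these fields without any custom JavaScript -->
  <input type="email" name="address" required>
  <input type="date" name="delivery">
  <input type="search" name="q" placeholder="Search products">
  <input type="submit" value="Send">
</form>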

Sharper focus on Web application Requirements
HTML 5 is aimed at making it easier to build search front-ends, wikis, real-time chat, drag-and-drop tools, discussion boards and many other modern web elements into any site, and have them work more efficiently.


Sunday 9 December 2012

Microsoft kept partners in dark about Surface


Microsoft kept its personal computer partners largely in the dark about its plans to launch a competing tablet computer, with some long-time collaborators learning of the new gadget only days before its unveiling, according to people with knowledge of the matter.

The secrecy that shrouded the Surface tablet risks alienating Microsoft's hardware partners, and marks a departure from the software company's tradition of working closely with hardware companies to test and fine-tune every new product.

It also underscores how Microsoft is starting to take pages from Apple Inc's playbook, keeping its cards close to the vest as it works to reinvent its Windows franchise and jump into the hardware business.

The earliest that Microsoft's personal computing partners were told about the tablet was last Friday, just three days before it was shown to the media at an event in Los Angeles, according to sources in the U.S. and Taiwan technology industry who spoke on condition of anonymity.

Windows chief Steven Sinofsky made a round of telephone calls but gave only the barest details on Friday, neither revealing the name of the gadget nor its specifications, two people close to Microsoft's partners told Reuters.

As such, Microsoft's main partners remained "in wait-and-see" mode and had to monitor the news for details, one of the sources said.

Microsoft Chief Executive Steve Ballmer told reporters the company had informed its largest hardware-manufacturing partners about the tablet. A company spokesman declined to say how much of a heads up the partners were given, or to elaborate further.

Sources at Acer Inc and Asustek Computer Inc, the world's fourth and fifth largest PC makers respectively, said the first they had heard of the new tablets was at Ballmer's news conference on Monday.

"No senior executives heard about the news last week," said an Acer executive, who added they were still seeking details. "We're quite surprised."

Acer shipped nearly 10 percent of PCs in the first quarter of 2012, with Asustek accounting for 6 percent, according to research house IDC.

The Surface marks a major strategic shift for Microsoft, ahead of the expected release of its new Windows 8 operating system by the fourth quarter of this year.

Microsoft is now pitting itself as a direct competitor, breaking with a 37-year-old model where it had licensed its software to original equipment manufacturers (OEMs) such as Dell Inc or Hewlett-Packard Co, which made the machines.

Competing head-on with PC makers may damage a relationship that has long dominated the computer world, where nine out of 10 PCs run on Windows. Analysts pointed to similar concerns in the Android smartphone world surrounding Google Inc's decision to buy Motorola.

"The strategy may affect the willingness of device manufacturers to work so closely with Microsoft, as it will now be viewed as a competitor as well as a partner," said Andrew Milroy, vice president of ICT Research for Asia-Pacific at Frost & Sullivan in Singapore.

Driving Microsoft's shift, say analysts, is the growing clout of Apple, whose iPad is threatening the market for notebook computers. Notebooks are still largely a Windows business, but its growth is a fraction of the tablet market.

Sunday 14 October 2012

SEO (Revolution in Internet Marketing)

Every day millions of people use the Internet for their businesses. Surveys show that more than 85% of Internet users find new websites by using search engines. The Internet has become a highly visible and competitive marketing resource for your business.

We at Permute IT Pvt. Ltd. are a pool of online marketing professionals with immense expertise and experience. Your company can have the best-looking website in the world, but if no one sees it, your efforts are in vain. More often than not, people interested in what you do will use a search engine like Google or Yahoo! to find the most appropriate websites to visit.

The Permute IT SEO strategy is the cheapest, fastest and best way to market a brand, company or service, generating more business by improving the volume and quality of traffic to a web site. A website can be optimized through organic search or sponsored search.

Studies have shown that organic search listings are clicked on more often than sponsored or paid listings, which is why it is so crucial that companies optimize their websites for the best possible placement. Building a business online is neither easy nor quick. It takes time and dedication to generate the desired results. You need to monitor your Internet marketing strategy to make sure you are getting results, that is, that you are able to advertise your product and service to your target customers. This helps you achieve a significant place in global markets. A successful Internet marketing strategy will include not only link-building and search engine optimization, but also social media networking, keyword analysis, high-quality content creation, web development and web design.

Businesses and organizations can use a search engine optimization service to build brand awareness, promote products and services, and generate web site and social network channel traffic, viral buzz and incoming links to their web sites. To successfully build an online business, work through the following points to position it for many years of success.



  1. Ability to Acknowledge your expertise
  2. Niche your way to riches
  3. Boost sales, repeat business and customer loyalty
  4. Expand customer base and target audience / Increase referrals and word of mouth
  5. Build online relationships
  6. Create several "stay in touch" devices
  7. Don't expect overnight success.

Research shows that:

More than 93% of consumers worldwide use search engines to locate web sites. (Source: Forrester Research)
More than 85% of qualified web traffic is driven through search engines. (Source: User Survey)
About 75% of search engine users never scroll past the first page of results. (Source: User Survey)

Friday 7 September 2012

Security researchers to present new 'CRIME' attack against SSL/TLS


Two security researchers claim to have developed a new attack that can decrypt session cookies from HTTPS (Hypertext Transfer Protocol Secure) connections.
Websites use session cookies to remember authenticated users. If an attacker gains access to a user's session cookie while the user is still authenticated to a website, the hacker could use it to access the user's account on that website.

HTTPS should prevent this type of session hijacking because it encrypts session cookies while in transit or when stored in the browser. However, the new attack, devised by security researchers Juliano Rizzo and Thai Duong, is able to decrypt them.
Rizzo and Duong dubbed their attack CRIME and plan to present it later this month at the Ekoparty security conference in Buenos Aires, Argentina.
The attack exploits a weakness in a particular feature of the TLS (Transport Layer Security) cryptographic protocol and its predecessor, the SSL (Secure Sockets Layer) protocol, which are used to implement HTTPS.
All SSL and TLS versions are affected and the exploited feature is commonly used in SSL/TLS deployments, Rizzo said Thursday via email. The researcher declined to reveal which feature is vulnerable before the attack's presentation at Ekoparty.
The CRIME attack code, known as an agent, needs to be loaded inside the victim's browser. This can be done either by tricking the victim into visiting a rogue website or, if the attacker has control over the victim's network, by injecting the attack code into an existing HTTP connection.
CRIME doesn't require browser plug-ins to work; JavaScript was used to make it faster, but it could also be implemented without it, Rizzo said.

The attacker must also be able to sniff the victim's HTTPS traffic. This can be done on open wireless networks; on local area networks (LANs), by using techniques such as ARP spoofing; or by gaining control of the victim's home router through a vulnerability or default password.
For the attack to work, both the victim's client and the server hosting the targeted website need to support the vulnerable SSL/TLS feature, Rizzo said.
Rizzo confirmed that the HTTPS implementations on some popular websites are vulnerable to the attack, but declined to name any of them.
CRIME was tested successfully with Mozilla Firefox and Google Chrome. However, other browsers could also be affected, Rizzo said.
Mozilla and Google have already prepared patches that block the attack but they have not yet been released, the researcher said.
Last year at Ekoparty, Rizzo and Duong presented an attack called BEAST (Browser Exploit Against SSL/TLS), which was also capable of decrypting HTTPS session cookies. That attack affected SSL 3.0 and TLS 1.0 when used with certain cipher suites.
Mitigating BEAST involved upgrading to TLS 1.1 or 1.2, the latest versions of the TLS protocol, or prioritizing unaffected RC4-based cipher suites for older versions of the protocol.
However, the mitigation solution doesn't work for CRIME because, in addition to exploiting a feature that is present in all versions of SSL/TLS, the attack is not dependent on a particular cipher, Rizzo said.
According to data from SSL Pulse, a project that monitors SSL/TLS implementations across the Web, 72 percent of the Internet's top 184,000 HTTPS-enabled websites were still vulnerable to the BEAST attack at the beginning of August.


Wednesday 22 August 2012