Top 8 SEO Plugins for WordPress

Building on yesterday’s post about tips to make your blog rank better, here are the plugins you’ll need to implement that functionality. If I missed anything or there are better alternatives, please let me know.

  1. Optimal Title makes sure your post titles occur at the beginning of each permalink page’s title tag.
  2. Related Entries Deep linking, baby.
  3. Google Sitemap Generator probably won’t make you rank better, but does provide good info.
  4. Sociable leverages social networks and parasite SEO, in a non-spammy way of course.
  5. yes-WWW adds the WWW to all requests (using an SEO-friendly 301 redirect). I use no-WWW (see below).
  6. no-WWW you know the drill.
  7. Top Posts intelligent deep linking, baby (thanks Derek).
  8. Permalink Redirect uses handy 301s to make sure clean permalink URLs don’t cause duplicate content issues.



New Planet Names

So there’s gonna be some new planets. Check.
Keeping true to history and the Skinny J’s, Pluto is in fact a planet. Check.
Cool new planet names?

Acceptance of the new definition of a planet, however, would mean that three other tiny celestial bodies would have to be welcomed into the solar fold – Ceres, Charon, and UB 313, which has been dubbed Xena, after the television heroine. [Emphasis Mine]

Wow, really? We’re going to name a planet after a television show about a fictitious, ancient (and mostly magic) warrior?

Sweet.

So I propose we call it not Ceres, but Chuck Norris.
And get that motherfucking ‘n’ off that motherfucking Charon.

Coochi Coochi.


10 Tips To Make Your Blog Rank Better

  1. Do keyword research and know which search terms drive the big traffic
  2. Make sure your target keyword appears once in the post title and at least once in the body of the post
  3. Make sure your post title appears first in the title tag on its permalink page
  4. Make sure your post title appears in a heading tag in the HTML
  5. Link out to other blogs, a lot. Eventually they’ll link back
  6. Redirect all URL requests without WWW to the WWW version of the page, or vice versa (see the sketch after this list)
  7. Use clean URLs
  8. Link back to old posts in the body of new posts
  9. Put links to your most popular posts in your sidebar
  10. Put links to related posts at the end of every post
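
The redirect in tip 6 only helps if it’s a permanent (301) redirect, so the engines consolidate both versions of every URL. Here’s a minimal sketch in PHP, assuming Apache-style $_SERVER variables; flip the logic if you prefer the no-WWW convention:

    <?php
    // Minimal sketch of tip 6: 301-redirect any request without the WWW to the
    // WWW version of the same URL (assumes Apache-style $_SERVER variables).
    $host = $_SERVER['HTTP_HOST'];
    if (strpos($host, 'www.') !== 0) {
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://www.' . $host . $_SERVER['REQUEST_URI']);
        exit;
    }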

I expand on these tips and much more in my upcoming ebook: SEO for Bloggers.



Scraping SERPs vs APIs

Yet another great post today, this one from SEOmoz, about the problems and choices that search engines present to SEOs with their API and automated-scraping policies.

I admit it. SEOmoz is a search engine scraper – we do it for our free public tools, for our internal research and we’ve even considered doing it for clients (though I’m seriously concerned about charging for data that’s obtained outside TOS). Many hundreds of large firms in the search space (including a few that are 10-20X our size) do it, too. Why? Because search engine APIs aren’t accurate.

I’m right there with randfish on this. I’ve developed some tools that scrape Google SERP data and return some awesome stuff, but I’m worried about publishing them for public consumption because, of course, scraping is against TOS and the APIs aren’t accurate. I really wish I could get access to real SERP data without pissing off the big G. I’m pretty sure there is some worry about reverse engineering or something that prevents them from giving us access to it.
Who knows, maybe I’ll take my chances and release the tools; they are pretty sweet.


Clickshare by Rank

SEO Black Hat has a great post and tool up today that extrapolates clickshare-by-ranking percentages from the leaked AOL search data. Clickshare is the percentage of total search volume that clicks on a particular position in the SERPs. According to his data, 41.1% of all searchers click on the first result.

I went ahead and graphed the distribution curve of these numbers, along with a modified cumulative percentage curve, and it looks like our much-beloved Pareto curve. I say “modified” cumulative curve because the first 10 positions only get a little over 80% of the total traffic, so the cumulative curve is based on only those searchers who click on a result in the top 10.

(the top line is the cumulative curve, and the bottom line the distribution)
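
The cumulative numbers are just a running sum normalized against the top-10 total. Here’s a quick sketch of that calculation in PHP; only the 41.1% figure comes from the post, and the rest of the array is a placeholder for the real clickshare-by-rank data:

    <?php
    // Sketch of the "modified" cumulative curve: normalize against clicks that
    // go to the top 10 positions only. Position 1's 41.1% is from the post;
    // the other positions are placeholders for the actual clickshare numbers.
    $clickshare = array(1 => 41.1 /*, 2 => ..., up to 10 => ... */);

    $top10Total = array_sum($clickshare); // a little over 80 in the real data
    $running = 0;
    foreach ($clickshare as $position => $share) {
        $running += $share;
        printf("#%d: %.1f%% of all searches, %.1f%% cumulative (top-10 only)\n",
               $position, $share, 100 * $running / $top10Total);
    }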

The disproportionate amount of traffic the first and second positions get is a little surprising, and certainly makes it clear how important it is to not only be in the top 10, but in the top 2.


Profitable Search Traffic Threshold

Excel rocks.

I mean, yeah, it’s Microsoft and it could be better, but there is tons of mathematical fun to be had with Excel’s formulas and graphs.

Over the past week or so I’ve been working on a table, full of lots of simple calculations designed to help me understand the economic patterns in search marketing. Here’s what I came up with.
First, put yourself in the shoes of a merchant: you sell hats, and on average you make a 10-dollar profit per sale through your website. You convert about 1% of your visitors into buyers, so each ‘average’ visitor is essentially worth 10 cents to you. You want more traffic, so you’ve decided to create some new content for your site to build natural SEO traffic. You take the time to write and code the new pages, or you outsource the job; either way, let’s say each new page of content costs you $100 on average to create and rank (including copywriting, HTML and linking). Each page is targeted at one keyword, and your site ranks very well, so you generally manage to get about a 1% “clickshare” of the total overall keyword traffic for the keyword you are optimizing (based on something like Wordtracker data). You would like to make your investment back in increased sales in a month using the new traffic the pages will generate.

Using these givens, there is a formula:
CostPerPage / KeywordsPerPage / ProfitPerVisitor / DaysToBreakEven / Clickshare = TrafficThreshold
So for our hypothetical situation that is: 100 / 1 / 0.10 / 30 / 0.01, or 3333.3.
This means that for you to break even in one month with these cost and profit numbers, each keyword you optimize for needs to have at least 3334 total searches per day. This is your traffic threshold.
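
The same arithmetic as a tiny PHP function, if you’d rather sanity-check numbers without building the spreadsheet (the parameter names are just illustrative):

    <?php
    // The break-even formula from above: the daily searches a keyword needs so
    // that (searches * clickshare * profit per visitor * days) covers the cost
    // of the page, split across however many keywords the page targets.
    function traffic_threshold($costPerPage, $keywordsPerPage, $profitPerVisitor,
                               $daysToBreakEven, $clickshare) {
        return $costPerPage / $keywordsPerPage / $profitPerVisitor
             / $daysToBreakEven / $clickshare;
    }

    echo traffic_threshold(100, 1, 0.10, 30, 0.01); // 3333.3 searches per day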

There are lots of ways to improve this number: optimizing each page for more than one keyword, increasing your profit per visitor (by increasing your average profit per sale or your conversion rate), being willing to wait a bit longer to make your money back, or reducing your cost to create new pages. The numbers will change but the formula remains the same.

But let’s look at how each individual factor controls your threshold.
First, cost per page: starting at $1000 per page and going down to $10, your profitable traffic threshold per keyword looks like this:

As you increase the number of days you’re willing to wait to recoup your initial investment, the graph looks like this:

By optimizing for more keywords per page, you can bring down your threshold along this curve:

And finally, by increasing your profit per visitor, you do this to the shape:

One initial observation: the cheaper you can make your pages, the less traffic you need per keyword to justify the expense; it’s a purely Zipf curve, with no point of diminishing returns. On the other hand, with profit per visitor, keywords per page and days to recoup, there is clearly a point of diminishing returns past which improvements to your numbers are no longer low-hanging fruit. For each of these three metrics there is a sweet spot where you can minimize your traffic threshold with the least amount of effort.

The point of trying to minimize your traffic threshold is to make your business model nimble enough to squeeze way down into the tail of your niche and take advantage of as much of the available keyword traffic as possible.


Kottke on What Makes an Idea Viral

The factors that play into the “virality” of an idea, or more specifically a link in this case, are a topic I’ve started to delve into with my What Makes an Idea Viral series. Yesterday, Jason Kottke did a little bit of his own riffing on the Seth Godin post that started my series:

Seth hits the nail right on the head with this. When I’m deciding what links to post here, I’m essentially curating ideas, collecting them to “send” to you (and to myself, in a way). And unconsciously, these seven points factor into my decision on what to post here.

Kottke essentially agrees with Godin in his list of factors. We should all be thinking about these points when creating any content we’d like people to link to.


PHP Ajax Tail Implementation


A while back I linked to a demo of a script I wrote implementing the Unix command tail (for watching the data being appended to a file), so I could tail a log file. I finally got around to posting the source code.
You’ll need saja, the secure AJAX for PHP framework, and my saja.functions.php file, as well as the actual output page, tail.php.
As with most of my code it’s icky and hackish, but it works. For me, at least.
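
If you just want the gist without digging through the source, the core idea is a function that remembers how far into the file it has read and only returns what was appended since. A minimal sketch (illustrative only, not the actual saja.functions.php):

    <?php
    // Illustrative sketch of the tail idea: return anything appended to $file
    // since byte $offset, plus the new offset, so the AJAX side can keep
    // polling "what's new?" without re-reading the whole log.
    function tail_since($file, $offset = 0) {
        clearstatcache();            // don't trust a cached filesize()
        $size = filesize($file);
        if ($size <= $offset) {
            // Nothing new (or the log was truncated/rotated), so reset.
            return array('offset' => $size, 'data' => '');
        }
        $fh = fopen($file, 'r');
        fseek($fh, $offset);
        $data = fread($fh, $size - $offset);
        fclose($fh);
        return array('offset' => $size, 'data' => $data);
    }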


Wikipedia’s XML Dump, MySQL and PHP

For a corpus to use for an as-yet-unnamed project I’m working on, I’ve been struggling with the unwieldy Wikipedia XML dump.

1.4 GB of pure XML wiki content. It’s a huge pain to import, however, since SQL dumps are no longer directly released. I had to install MediaWiki’s database structure (MediaWiki is the software that Wikipedia runs on; in the source code it’s in maintenance/tables.sql), then run a Java program called mwdumper to create an enormous SQL file. All of that didn’t take very long; what’s taking a while now is actually importing that SQL file.
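
For reference, the import step itself can be scripted in PHP so the SQL file never has to fit in memory. A rough sketch, assuming (as the mwdumper output appears to do) that each statement sits on its own line ending in a semicolon; the connection details and file name are hypothetical:

    <?php
    // Rough sketch of a streaming import: read the enormous SQL file line by
    // line and execute one statement at a time. Assumes each statement ends
    // with a semicolon at the end of a line; credentials/paths are made up.
    $db = new mysqli('localhost', 'user', 'password', 'wikidb');
    $fh = fopen('wikipedia.sql', 'r');

    $buffer = '';
    while (($line = fgets($fh)) !== false) {
        $trimmed = rtrim($line);
        if ($trimmed === '' || substr($trimmed, 0, 2) === '--') {
            continue;                        // skip blank lines and comments
        }
        $buffer .= $line;
        if (substr($trimmed, -1) === ';') {  // end of a statement
            $db->query(rtrim($buffer, "; \r\n"));
            $buffer = '';
        }
    }
    fclose($fh);
    $db->close();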
