
Interview of Jason Duke, Founder of Strange Logic & Data Gobbler

July 27, 2005

Jason Duke is a long time pal who gives me far more free advice than I deserve. One time, while engaged in a chat session with him, I pinged him to ask if he would mind doing a few questions for an interview.

It's fairly obvious that he is a lover of search, and when Jason gets going he is hard to stop. The interview went like this...

How & when did you get into search?

A user, no more and no less. I started via the old BBSs, and one of them had an internet connection and acted as a gateway. I am starting to feel old now.

A long time ago you bought my ebook and gave me some tips, but I had no idea who you were at the time. Later you became rather well known. What made you decide to remain obscure for a while and then later become well known?

I made no conscious decision, but I was happy sitting where I was, doing what I was doing. I guess my exposure increased when I started to comment on papers such as Hilltop, etc.

PageRank PageRank PageRank. Hyped more than enough because it is easy to talk about, but what other algorithms or ideas do you believe Google layers on over the top of PageRank?

OHHH I could go on for a while, but rather than say specifics such as Hilltop, Local Rank etc, I'd suggest thinking laterally.

A patent may be applied for and granted, a paper may be written, and all of these I believe are clues. I doubt anyone (absolutely NO ONE) except the engineers at the specific engine knows for sure what the exact algorithms are.

I believe these are clues to what an engine may be doing now, has been doing, or may do in the future. Alternatively, they may be put out there for FUD reasons, or to pre-empt a competitor laying claim to an idea, i.e. a restrictive patent application to protect a position and way of working.

To answer the question, I'd come back to the old saying of "Teach a man to fish...". If you understand what is happening in search generally, as well as what has happened historically, ask yourself the simple question: what would YOU do to increase relevancy, make the user experience better, or decrease the problems search faces?

Once you have an idea of the way they may be working then you understand the potential problem. Once you understand the problem you have an opportunity of delivering a potential answer. Keep doing that till there are no more problems and suddenly you know what you have to do to rank whenever and however you want.

It may be expensive at times. It may be hard work, but you have answers and that's important!

I remember right around the time when search engines partnered with blog software vendors on their nofollow idea, you offered to help solve the blog spam problem. Why did you do that? Why do you think the search engines were not interested in obtaining any feedback? Has their solution fixed the problems, or does it have other goals?

Nofollow, and the way it was implemented in such a short time, working together between search companies, software companies, and users, was a momentous achievement. Just getting those guys around the table to collectively discuss and try to solve a problem was great.

Unfortunately I believe the problem (link spam) was an indirect consequence of the search companies themselves, or rather their reliance on links as a measure for ranking: links = ranks.

Links were a PITA to get nicely, so getting them legally but in a socially irresponsible manner was easy. The problem became so huge that people moaned and moaned and moaned, and the blog voice was starting to get some prominence, so the engines had to be seen to be doing something.

I did NOT believe that nofollow was a viable longer-term solution for one simple reason: it required site-owner interaction. They HAD to update their software.

The reality is that the majority of blogs and other content management systems are NOT updated. (And on that note: how many other CMS companies that were not blog-related did you see around the table that day? If a site allows user interaction, it is a link spam target. But that's for another discussion :) )

A harder-core spammer will look for links that last. A moderated, well-managed blog will not keep the links, as the owner will delete or manage them personally. It is the old blogs that will be targeted, and nofollow is not going to be deployed there, as the software will not be updated.
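For context on the mechanics: nofollow is just an attribute stamped onto a link, and the "software update" site owners needed was for their CMS to start adding it to user-submitted links. A minimal sketch of that change might look like this (the regex approach and the function name are hypothetical and naive; real blog software would use a proper HTML parser):

```python
import re

def nofollow_links(comment_html: str) -> str:
    """Add rel="nofollow" to every <a> tag that lacks a rel attribute.

    A toy sketch of the change blog/CMS vendors had to ship so that
    engines could discount user-submitted links.
    """
    def tag_it(match):
        tag = match.group(0)
        if 'rel=' in tag:  # already carries a rel attribute; leave it alone
            return tag
        return tag[:-1] + ' rel="nofollow">'

    return re.sub(r'<a\b[^>]*>', tag_it, comment_html)

print(nofollow_links('Nice post! <a href="http://example.com/pills">pills</a>'))
```

The point Jason makes stands regardless of implementation detail: unless the installed software actually runs code like this, the attribute never appears, and old unmaintained blogs never get it.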

The reason I said, "speak to me, I have an answer that will work" is simple: I spam. I am a poacher. I am a link hunter. I hate doing it, but I do it anyway when and where it is needed, where the competition demands it, and where it works.

The answer we had was to deliver an opportunity to remove the social problems of link spam while still delivering quality in the SERPs, and to do it algorithmically, so that it did NOT require site owners and webmasters to change their site, their software, or the way they worked. It worked on old blogs as well as new blogs, etc.

It wouldn't affect my business as the competition and I would still have the same challenges to overcome. The difference would be that link spam wouldn't solve any of those challenges, and the social damage would diminish to a trickle and in time would disappear.

How do you remove the social problem from link spamming?

You either do away with the social groups (getting rid of bloggers is one answer, though I don't suggest that happens :)), or you undertake other methods.

Nofollow was an attempt, but it failed. Even Wikipedia believed in it, then changed their minds. And that was a political sidestep to the question: the answer I think you want to hear, I am afraid I can't give at the moment, for legal reasons.

That's good legal BTW, not bad legal :)

Does blog spam still work? Does signing guestbooks still work? Does forum spamming still work? If so, how long until they stop?

It all works to some degree or other in certain engines, although it does depend on the vertical market you are trying to optimise for.

Until it stops. Ask the engines?

Does it work better in shady markets or clean ones? What risks are associated with blog spamming and the like?

The risks are multiple. Social risks: you will be annoying people. And you may get pages, sites, IPs, and subnets banned or penalised in the engines. There are almost definitely more risks, but the only ones that concern me are the social ones. I don't like death threats at the best of times :)

What socially responsible link building techniques are being underutilized across the board?

Think laterally. Does code give you links? Not in my opinion.

People do.

People make the decision to place a link on their pages to yours. Understand people, understand what makes them tick. Then look to get links through forming relationships. If more people did that then my job would be harder. Thankfully they don't :) overall anyway!

Are there any sites, books, or other things you recommend people see or do to better understand social networks and learn how to make people want to link to you? Does looking at search results tell you anything?

Search results can tell you how other people think. That is not to say you can't think differently and win in the SERPs game. And just for the record, when I said social networks I wasn't specifically speaking about sites such as Orkut, 360, etc.

I meant in the good old fashioned sense of REAL life people. Ones you meet on the street, chat to on the phone and maybe even send letters to :)

Two books I recommend: one is a sales book and one is a book about understanding people.

They are totally different, but equally important. Not only for helping to get links, but for the other part of running a successful online business: converting a prospect into a customer.

Because without a sale, a good SERP ranking is nothing but a financial drain. All that traffic and no monetisation means cost.

So you think the real opportunity exists with those individuals who can understand people and bridge the divide between the web and the real world?

Simple answer is YES!

I heard you collect a ton of data?

I've heard similar but I am pretty sure there are search companies that collect more than me - Google and Yahoo for example :)

How do you view collecting data?

It is as important to me as a chisel is to a carpenter. I can't operate effectively without it. I can work without it, but I can't work efficiently. The more efficiently I work, the greater the results I get from every ounce of work that is put in.

What is important to collect?

Let's take a step back. Remember I said above about what the engines may do: look for the problems. It's the problems that are important, as without knowing the problem you'll never find an answer.

Well, the data the engines have, which helps them deliver their solution, delivers you a challenge: a problem standing in your way. The data that is required is the right amount and type of data to help you overcome that problem at that specific time.

In time you will find that new problems come up, and that to help find the answer you need to refer to data you already have, and slowly but surely you will need less NEW data than previously. When I say new I don't mean fresh (freshness still matters); I mean a new style or type of data.

Can you collect too much data?

Nope! :)

How long does it take you to figure out what they did when search algorithms change? If you are unsure of the change that was made what people do you listen to, trust, and chat with?

Here is a little *not so secret* secret. I try to preempt and understand what the engines may do in the future, based on the ongoing research and analysis that we discussed above.

Every time I do this I see new problems and each time I see a new problem I try to come up with an answer. Some of them are easy, some are nigh on impossible, but in time answers come.

Now when the engines make a change, I do not know about it any quicker than anyone else (search engine engineers excluded). I see the SERPs change the same as anyone else. Then I go through all that work I did and hope the answer is in there. Overall, it has been there more often than not.

As to who I talk to: anyone and everyone. I would rather have a million different POVs than two. Two has a 1-in-2 chance of being right, and is easier to analyse for sure, but it's still only two hypotheses, whereas with a million hypotheses there is a much greater chance of the needle being in that haystack somewhere. If it's there, I'll hopefully find it.

The forward thinking stuff you were talking about... does the average webmaster need to be exceptionally concerned with that? Or is that more for those in hyper competitive markets or those who primarily rely on selling search services?

If they want to work and dominate in the vertical market they operate in, then yes, they should, IMO. Running a business online is about RUNNING A BUSINESS. You'd want to know what is happening in your industry offline; why is online any different?

As to hyper-competitive marketplaces, it's the same answer. It is simply that there may be more (sometimes less, actually) to concern yourself with. As to whether they have to, ultimately that's a decision for the business owner.

It seems content remixing (mixed RSS spam, AdSense scrapers, etc) is becoming far more common and will be a huge problem unless search can make it less financially viable. Does & will it undermine search relevancy? How will they get around it?

Working on your example of scraped (and legitimate) feed re-use, the engines already have the answer and have started to turn up the knob to deploy it: it's a duplicate content problem. DaveN recently spoke about some of the consequences of this on his blog, for example. As to whether it undermines relevancy, my answer is a resounding YES.

How it gets remedied is a two-fold question: how do the spammers (me included) remedy it, and how should the engines remedy it?

The engines need to develop (and we all know they already are) a better understanding of lingual complexities: sentence meanings and paragraph meanings rather than simply word association.

We can all put fancy names to it (C-indexes, LSI, etc.), but ultimately it is about building an artificial intelligence that understands the meaning of a set of words, paragraphs, pages, and sites. The engines are moving closer to that nirvana.

As to the spammer, and how he defeats the problem of the engines understanding meaning: well, we need to expand the problem a little further. The engines like the content to mean something for the terms, but they want it to be unique to you. While the engines are unable to fully understand the true underlying lingual nuances of every language out there (let's not forget that the engines, engineers, etc. are generally based and operate in English-speaking countries and generate most of their revenue from English-speaking searchers), you need to think about what they CAN understand.

Once you've got that sorted, you need to think about how you may be "knocked down a peg or 2" in an overall score. Duplicate content comes to the fore here. So the answer to the problem is to make your content non-duplicated, make it unique: UNIQIFY IT :)
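How might an engine decide two pages are "duplicates" in the first place? The published near-duplicate detection work of that era (e.g. Broder's shingling) compares overlapping word windows between documents. The sketch below is a toy illustration of that published technique, not anything any engine has confirmed using:

```python
def shingles(text: str, k: int = 3) -> set:
    """k-word shingles: the set of overlapping k-word windows in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def resemblance(a: str, b: str, k: int = 3) -> float:
    """Jaccard overlap of the two shingle sets.

    1.0 means word-for-word identical; values near 0 mean the wording
    shares almost no k-word runs, i.e. the content has been 'uniqified'.
    """
    sa, sb = shingles(a, k), shingles(b, k)
    if not (sa or sb):
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Under a measure like this, lightly shuffled scraper content still scores very high resemblance to its source, which is why genuine rewording, rather than token swaps, is what actually escapes a duplicate filter.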

Do you believe search companies have any moral obligation to help foster an atmosphere where people want to create quality information and it is easy to make information useful?

A moral obligation to the searchers, definitely, but they also have an overriding moral obligation to their shareholders.

I believe that the shareholders will win out time and time again. If we accept capitalism (and I do) then the shareholder obligation will always win.

Is Google in error for allowing AdSense to fund so much information pollution?

I stand by my previous answer, my lord.

Do they intentionally have low standards because they can sort out AdSense spam sites better than other algorithms do?

I can well believe it, and it would make sound commercial sense if they deployed that information. Ensuring you have a market edge over your competition is not an issue for search but for business, as is ensuring revenues and income increase so that the shareholders continue to support the organisation as a whole.

Speaking of business, do you still take on clients? If so, how do you target them and what makes a client appealing as a prospect?

I don't (and haven't for some time) take on clients in the traditional sense of SEO consultancy. We are in the fortunate position that we changed our business model to incorporate a share in the financial worth of the work that we do. It has seen our revenue and profits climb quite well.

We don't go out looking for clients, but do investigate potential partners. We look at their business model, we look at the marketplace, both on and offline, we look at the margins, and maybe, just maybe if we like the look of those numbers and the work involved we partner with them to our mutual success.

When considering a business to partner with, how do you know whether it is better to approach the business directly or to use affiliate programs?

I get out a lot of old fashioned tools. Pen and paper, a calculator and a large spreadsheet.

How do you convert that demand into traffic? What keyword tools and ideas do you find especially useful?

I do the numbers :)

Where do you grab your research data to tell you if the business models are decent on / off the web?

I have a fair amount of data here that covers a large set of vertical markets. It helps me decide whether the volume is there, and gives me user demographic behaviour and other business-related information. Like I said before, it's just like a chisel to a carpenter: it doesn't do the work for me, it helps me do the work.

It's the same for keywords, etc. I have similar (and sometimes the same) data as everyone else has available to them. I look at the marketplaces, do the hard hours analysing it, and make a decision. Admittedly my DB may be a little larger (OK, a few TB larger) than most, but it is still only a tool.

Can other webmasters wipe you out of search results? If so how common is it and in what fields does it occur?

Yup, you'd better believe it. I lose rankings regularly. It is as common as muck, but because of that proactive work I spoke about before, generally I don't lose enough to concern me or our clients.

We may lose position 7, then in the next update things will change and I lose position 3 to another site. Quite often I lose positions to myself.

Is it something the average webmaster should be concerned with? What can webmasters do to minimize the risks associated with others hijacking their sites?

In my opinion don't worry. Positions and SERPs change. Move on, get over it, get a new position. There are 10 on page 1, don't stop till you have them all.

Average new webmasters... DaveN said something about having hands in too many pies: something gets overcooked. Should new webmasters go for all 10, or try to land 1 or 2 good ones?

I agree with Dave. Don't undertake more work than you can handle, but if you can honestly manage it, then that's a different thing entirely...

So if you still want to have as much real estate as possible and want to minimize the associated risks, the obvious answer is to have others rank FOR you. How do you do that?

I agree. SEO is about ranking your page, but running a business online is about maximising the routes of income. I specialise in SEO, but more importantly I run a business.

If I can't own the page and have every position from 1 - 100 then let's make sure I do my best to ensure that the positions that are there deliver some value (financial or otherwise) to me. Affiliate marketing helps.

If you have to compete with 9 other slots per page wouldn't it be better to have some (all?) of the other slots promoting you, your widget, your service and you at least earn a penny than nothing at all?

That nicely leads onto my controversial thinking about the so-called 302 and other hijack issues you asked about before. I think the question was: what can webmasters do to minimize the risks associated with others hijacking their sites? Firstly, I want to clarify something: the engines do what they do, how they do it, and me moaning and bitching about it isn't going to change it.

I like driving at 120mph along the M6 toll road (Nice fast clean road here in the UK), but if I get caught I know the police won't agree with my POV. The law says that I should not exceed 70mph. The laws in search are the algos the engines place. They may not be right, they are often far from perfect, but it's the stuff we have to work with.

If the algo says that a page with a different domain to me deserves to rank higher than me, then so be it. If that page redirects via a 302, meta refresh, etc., then so be it. Personally I am normally over the moon, as these are NOT hijacks; these are bad algos. The reality is I still get the damn traffic, so I am happy :)

The real problem occurs when and if (and there are NOT that many people actually doing this imo) the page cloaks a redirect to me when a spider comes along but serves their own content when a human comes along.

My belief is simple. Deal with it, like you would deal with any other page out there that outranks you.

Are search engines using user feedback to bias search results? How quickly do you see search engines using user feedback in the search results?

I think the greater question is: are they using people? And the answer is yes. Many engines are EXTREMELY cash-rich at the moment, and people are quite often cheaper than code. If it is in the engines' interests to utilise outside contractors to help define relevancy, or if they decide to utilise spam reports, etc., then it is simply one more challenge to overcome for those (me included) that want to rank.

If I was an engine I'd welcome the feedback! I am not saying I would trust it implicitly but I would definitely welcome it.

How do you see vertical / personal / mobile search changing the search landscape?

As to the speed of using it: I can say with utmost certainty it is happening now. Do you remember we spoke about understanding people before, in the context of link building and sales conversion? Well, let's flip the coin for a moment and place ourselves in the position of the engine. If we lived in utopia, then I would like to go to one engine and one engine only. I'd like to search there and know that whatever I search for, it knows in what context I mean it, and delivers relevant results back to me immediately.

Let's take the word cat for a moment, do I mean:

  • the Unix command,
  • am I interested in felines,
  • or is it slang, and I mean something completely different?

All the results, whether they be technical, feline or porn are correct in the general terms, but they are not what I mean.

At the moment the engines will make a decision and show me results they generally feel are relevant, but I did have a choice: I could have gone to a vertically focused engine for the specific contextual meaning I had in mind when I typed cat, but that is a bodge; it is not a true answer.

While the major engines develop, test, and deploy this greater understanding of what I PERSONALLY want, I feel there is a marketplace for vertical, personal, and niche engines. But once the majors move up a gear, do I really want to go to one site to find information on one thing and change sites every time I want to undertake a different style of search? No way!

As search algorithms get more linguistically advanced (as you mentioned earlier) will page content become more important? What is the best way to structure a page and a site to do well in linguistically intelligent algorithms?

You know what, my answer is actually simple, and please don't be shocked by it. Build content for people, not engines; it's the same old thing that has been preached for millennia. Write (or get written, or acquire) unique content, then optimise it according to basic and more advanced on-page criteria. You know what, Aaron? People should read your book.

It has the best examples of on-page optimisation methods you can find, and they should always be incorporated into any SEO or other online marketing campaign.

The downside is that unique content = time = money. There are some answers that aren't so clean. If you don't have the ability to get truly unique content, either by outsourcing or writing it yourself, there is LOADS of content out there to be bought, and even more that is free. The downside there is the duplicate content problem: whether it be an Amazon feed or the entire Gutenberg project, you can get voluminous amounts at no charge, but it is all duplicate content.

But if you have rights to the content, or the content is contractually free for you to do with as you want, then there are software tools (the so-called "button pushing") that help turn that dupe content into a unique position.

I run one that is free while we test it publicly (if Google can do beta testing, so can I!) at www.widgetbaiting.com. It has a few algorithms in there, but ultimately it is a simple process: dupe content in one end -> non-dupe content out of the other. It works with feeds too, at widgetbaiting.com/cgi-bin2/index.pl, but consider that alpha quality.

On top of that, think about good old-fashioned white hat in general: can the engines find you? Is your site accessible to a spider? www.widgetsitemap.com is another freebie from us while we test it out. It works with all the major engines out there, letting them know in a legitimate fashion as your site updates. It's an XML builder for Google, Yahoo, and a thousand other engines too. All you have to do is build content; that's it, in plain simple terms. Build content; the rest is getting links. SEO 101 lesson over :)
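As a sketch of what such a tool emits: Google's Sitemaps beta, new at the time of this interview, accepted a plain XML list of URLs plus a ping request when the file changed. The namespace and ping endpoint below are the ones Google documented for that 2005 beta; treat them as historical assumptions rather than current APIs:

```python
from urllib.parse import quote
from xml.sax.saxutils import escape

# Historical values from the 2005 Google Sitemaps beta (assumptions, not
# current endpoints).
NS = "http://www.google.com/schemas/sitemap/0.84"
PING = "http://www.google.com/webmasters/sitemaps/ping?sitemap="

def build_sitemap(urls):
    """Emit a minimal <urlset> document listing each URL once."""
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return (f'<?xml version="1.0" encoding="UTF-8"?>\n'
            f'<urlset xmlns="{NS}">\n{entries}\n</urlset>')

def ping_url(sitemap_location):
    """The GET request a tool like this would fire after an update."""
    return PING + quote(sitemap_location, safe="")
```

Nothing in the format rewards or penalises anyone; it simply tells the spider where the content lives, which is exactly the "no footprint beyond a feed on your server" point made below.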

SEO 909 is for us all individually to work out, through understanding and undertaking the proactive work I spoke about before. Do the proactive work, then be reactive, having the answers in front of you when an algo change occurs.

With your widget sitemap, do search engines look for footprints for things like that? Would you recommend using any automated product that could leave footprints?

Only the engines know for sure, but there is no footprint other than having a feed on your server that lists URLs on your site. It's to their specs, and it is promoted by the engines. We build the Google-defined XML for their new sitemap product (in testing) for Google, and RSS for other engines, and then it pings them, i.e. lets them know it is live and has been updated.

-----

Thanks for the interview Jason.

If you would like to learn more about Jason: he is a frequent poster at ThreadWatch, has written multiple articles about Hilltop & LocalRank, and is the director of Strange Logic, one of the few internet marketing firms I recommend pursuing a business relationship with.

- by Aaron Wall, owner of Search Marketing Info

This article may be syndicated in whole or in part. Simply provide a link back to the original article or http://www.search-marketing.info. Please note that I do not usually update articles over time, and the date last modified on article pages usually refers to a navigational change.

 

Search Marketing Info © 2004 - 2017