I've suggested this on the Wikidot.com forum (http://community.wikidot.com/forum/t-214639/seo:canonical-page-and-noindex-on-lacking-pages):
Wikidot already supports some of the tags needed for SEO (Search Engine Optimization), but it lacks two features that are a must for SEO.
(…)
Robots tag = "noindex"
Secondly, Wikidot could use the code below on non-existing pages, so that search engines won't index pages with that default text:
> + The page does not (yet) exist.
>
> The page blablabla you want to access does not exist.
>
> * create page

This is not only duplicated content, but it also hides our real content behind these "keywords": "page", "exist", "create"… This is especially true on newly created wikis that are not fully populated yet.
The code needed on these pages is:
[[code type="HTML"]]
<head>
…
<meta name="robots" content="noindex" />
…
</head>
[[/code]]

The same could be done for admin:manage (does anyone want it to appear on Google?).
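To see which pages already carry that tag, a small checker can parse the served HTML and look for the robots meta directive. This is an illustrative helper of my own (Python, not part of Wikidot), but the tag it looks for is exactly the one proposed above:

```python
# Illustrative helper (not part of Wikidot): report whether an HTML
# document carries <meta name="robots" content="noindex">, i.e. whether
# search engines are told to skip it.
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        # HTMLParser also routes self-closing tags (<meta ... />) here.
        if tag != "meta":
            return
        attrs = dict(attrs)
        if attrs.get("name", "").lower() == "robots":
            directives = (attrs.get("content") or "").lower()
            if "noindex" in directives:
                self.noindex = True


def has_noindex(html: str) -> bool:
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex


print(has_noindex('<head><meta name="robots" content="noindex" /></head>'))  # True
print(has_noindex('<head><title>Normal page</title></head>'))  # False
```

Feeding it the HTML of a non-existing page (or admin:manage) would show whether the fix is in place.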
Doing a quick search through the Wikidot source files, I think I've found a way of implementing this. The fix for robots=noindex on not-found pages might look like the following (the code starts at line 50 of http://github.com/gabrys/wikidot/blob/master/templates/layouts/WikiLayout.tpl):
[[code type="php"]]
{* [...] line 50: *}
<meta http-equiv="content-type" content="text/html;charset=UTF-8"/>
<meta http-equiv="content-language" content="{$site->getLanguage()}"/>
{* *************** NEW CODE START *************** *}
{if $pageNotExists}
<meta name="robots" content="noindex" />
{/if}
{* **************** NEW CODE END **************** *}
<script type="text/javascript" src="/common--javascript/WIKIDOT.js"></script>
{* and so on... *}
[[/code]]
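The logic of that template patch can be mirrored in a minimal sketch. This is Python for illustration only (Wikidot's layout layer is a Smarty/PHP template), and the `page_exists` flag stands in for whatever variable the backend would actually pass, such as the assumed `$pageNotExists` above:

```python
# Illustrative sketch (Python, not Wikidot's actual template code):
# render the <head> fragment, emitting the robots noindex tag only when
# the requested page does not exist -- the same conditional the template
# patch expresses with {if $pageNotExists}.
def render_head(page_exists: bool, language: str = "en") -> str:
    lines = [
        '<meta http-equiv="content-type" content="text/html;charset=UTF-8"/>',
        f'<meta http-equiv="content-language" content="{language}"/>',
    ]
    if not page_exists:
        # Tell crawlers not to index the default "page does not exist" text.
        lines.append('<meta name="robots" content="noindex" />')
    return "\n".join(lines)


print(render_head(page_exists=False))
```

Existing pages render exactly as before; only the not-found case gains the extra meta tag.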
In my opinion, this looks like an easy solution to the problem of duplicated content and unwanted search keywords.