1245447680|%e %B %Y
tags: api database dump
Today I worked on something really nice in Wikidot that finally makes the initial database dump more organized. Instead of one big single-wiki-dump.sql there are 6 files:
The split is a bit artificial, because they are all executed at once, but it is still nicer to have 6 shorter files than one long one. If we want to add a new license to the initial dump, we no longer have to search through a huge file and be really careful not to break anything else; we just edit the licenses file.
An even more important change is removing pages from the initial dump. Since pages are quite complex objects in Wikidot, removing them from the SQL dump drastically reduces its size (and complexity). But we still need to populate the first wiki, and this is done with API methods and a nice directory structure:
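For illustration, the layout might look something like this (the site and page names here are made up, not the actual dump contents):

```
files/dump/sites/
    www/
        start        <- wiki source of the "start" page
        nav:side
    template-en/
        start
```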
A site is a directory, and a page is a file in that directory. A special script (called by make) walks the files/dump/sites directory and calls the page.save API method on each page, supplying the content of the file as the wiki source of the page to be saved. Each such save creates many database objects:
- category (if it does not exist yet)
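The walking script can be sketched roughly like this. This is a minimal sketch, not the actual implementation: `populate_from_dump` and the `save_page` callback (standing in for the page.save API call) are hypothetical names.

```python
import os

def populate_from_dump(root, save_page):
    """Walk the dump tree: each subdirectory of `root` is a site,
    and each file inside it is a page whose content is the wiki source."""
    for site in sorted(os.listdir(root)):
        site_dir = os.path.join(root, site)
        if not os.path.isdir(site_dir):
            continue
        for page in sorted(os.listdir(site_dir)):
            with open(os.path.join(site_dir, page)) as f:
                source = f.read()
            # save_page stands in for calling the page.save API method
            save_page(site=site, page=page, source=source)
```

In the real setup, `save_page` would issue the API request; here it is just a parameter so the traversal logic stays separate from the transport.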
So, as you can see, we benefit in two ways:
- the initial database dump is easier to understand and modify
- adding pages to the initial dump is much, much easier
This still needs some polishing, but it is already a lot better than before.