The Sweble Wikitext Parser Offer to the Wikipedia Community

Our offer to the Wikimedia Foundation and the Wikipedia (technical) community is this: Come up with a new and better Wikitext, and use the Sweble Wikitext parser to convert old Wikipedia content to that new format. Naturally, the new Wikitext format should work well with visual editors and similar tools. We have spent more than a year of full-time work on a parser that can handle the complexities of current Wikitext, and it does not make sense to us to create another one. You only need one bridge away from the place you don’t want to be any longer (the current “old” Wikitext) to get to a new and happier place.


5 Responses to The Sweble Wikitext Parser Offer to the Wikipedia Community

  1. Waldir says:

    This would be a great opportunity to move to WikiCreole (or WikiCreole 2.0).

    In fact, even small changes to implement a stricter wikitext with fewer edge cases would be most welcome. These small changes would likely not even be noticed by most editors, and the few who are used to delving deeper into intricate syntax will easily pick up the new rules anyway.

    On the other hand, even a larger change, such as a move to WikiCreole, would not be such a nightmare: webmasters have been adapting to new syntax with every release of (X)HTML without major problems. But let’s see how the community reacts to this.

  2. dirk says:

    Hi Waldir, I agree, but I also have no strong opinion on a particular wiki markup syntax. I almost don’t care any longer whether it is Wikitext or WikiCreole. Perhaps it would be best to make Wikitext.new and WikiCreole 2.0 the same thing, assuming the broad community can come together on this.

    In the end, I think users will choose the winning markup, and that choice will be based on the capabilities of the underlying wiki software. So the engine with a real parser and advanced capabilities (measured by today’s standards) may well win. Given that all engines are stuck with regular-expression-based parsers, we are just at the start of that race :-)

  3. lambda says:

    I just found your project and now I have hope again!

    Approximately two years ago I spent a considerable amount of time building a good Wikipedia offline reader that mimics the online version in features. I used code from the gwtwiki library, which mostly works, but falls short when trying to cope with the indescribable things MediaWiki and Wikipedia do. Localisation was also a big problem to handle. It was just one big mess, and I decided that I was not willing to write code to handle all of this, resulting in many ugly quirks in the final rendering.

    I became increasingly frustrated with the idea that the unbelievably awesome concept and institution of Wikipedia uses this crufty and rotten backend, especially when thinking about the potential of the information stored in Wikipedia.

    Thank you for this. I will try the heck out of it when I get some time on my hands.

    This project means a lot to me, thanks for releasing the source.

  4. OrenBo says:

    Kudos on the parser, but even based on your own demos it is not mature enough, in the sense that it fails to parse the markup of complex pages.

    You need to get it working very close to the current parser before it can be used to convert the format.

  5. Dirk Riehle says:

    Hi OrenBo, thanks for the kudos!

    There is a subtle but important difference between being able to parse and being able to render. We can parse, but we can’t fully render (we are still missing “parser functions” and similar system libraries). For migrating to a new format, a parser is sufficient.

    When you look at MediaWiki, the parser is really intermingled with code from later processing stages, such as rendering, that shouldn’t be part of the parser.

    Dirk
