Commit c3d29027 authored by Poul-Henning Kamp

Move my rants into their own sandbox



git-svn-id: http://www.varnish-cache.org/svn/trunk/varnish-cache@4790 d4fa192b-c00b-0410-8231-f00ffab90ce4
parent 9de71f91
@@ -6,10 +6,11 @@
Welcome to Varnish's documentation!
===================================
We are making a fresh start on the documentation. Our previous attempts
have utterly failed, but my discovery of Sphinx_/reStructuredText_ as
documentation tools gives me hope that we have finally found a
platform we can kick ourselves into actually using.
Arnold's Laws of Documentation:

    (1) If it should exist, it doesn't.
    (2) If it does exist, it's out of date.
    (3) Only documentation for useless programs transcends the
        first two laws.
Contents:
@@ -20,6 +21,7 @@ Contents:
tutorial/index.rst
reference/index.rst
faq/index.rst
phk/index.rst
glossary/index.rst
Indices and tables
@@ -28,82 +30,3 @@ Indices and tables
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
.. _phk_autocrap:
====================================
Did you call them *autocrap* tools ?
====================================
Yes, in fact I did, because they are the worst possible non-solution
to a self-inflicted problem.
Back in the 1980s, the numerous mini- and micro-computer companies
all jumped on the UNIX band-wagon, because it gave them an operating
system for their hardware, but they also tried to "distinguish" themselves
from the competitors by "adding value".
That "value" was incompatibility.
You never knew where they put stuff, what arguments the compiler needed
to behave sensibly, or, for that matter, whether there was a compiler to
begin with.
So some deranged imagination came up with the idea of the ``configure``
script, which sniffed at your system and set up a ``Makefile`` that would
work.
Writing configure scripts was hard work; for one thing, you needed a ton
of different systems to test them on, so copy&paste became the order of
the day.
Then some even more deranged imagination came up with the idea of
writing a script for writing configure scripts, and in an amazing
and daring attempt at the "all time most deranged" crown, used an
obscure and insufferable macro-processor called ``m4`` for the
implementation.
Now, as it transpires, writing the specification for the
configure-producing macros was tedious, so somebody wrote a tool to...
...do you detect the pattern here ?
Now, if the result of all this crap was that I could write my source code
and tell a tool where the files were, and not only assume, but actually
*trust*, that things would just work out, then I could live with it.
But as it transpires, that is not the case. For one thing, all the
autocrap tools add another layer of version-madness you need to get
right before you can even think about compiling the source code.
Second, it doesn't actually work: you still have to do the hard work
and figure out the right way to explain to the autocrap tools what
you are trying to do and how to do it, only now you have to do so in
a language which is used to produce ``m4`` macro invocations etc. etc.
In the meantime, the UNIX diversity has shrunk from 50+ significantly
different dialects to just a handful: Linux, \*BSD, Solaris and AIX,
and the autocrap tools have become part of the portability problem,
rather than part of the solution.
Amongst the silly activities of the autocrap-generated configure script
in Varnish are:
* Looks for ANSI-C header files (show me a system later
  than 1995 without them ?)

* Existence and support for POSIX mandated symlinks (which
  are not used by Varnish, btw.)

* Tests, 19 different ways, that the compiler is not a relic from
  SYS III days. (Find me just one SYS III running computer with
  an ethernet interface ?)

* Checks if the ISO-C and POSIX mandated ``cos()`` function exists
  in ``libm``. (No, I have no idea either...)
&c. &c. &c.
Some day, when I have the time, I will rip out all the autocrap stuff
and replace it with a five-line shellscript that calls ``uname -s``.
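If that shellscript existed, it might look something like this (a hedged
sketch only; the OS list and the messages are my assumptions, nothing of
the sort exists in the Varnish tree)::

    #!/bin/sh
    # Sketch: pick build settings from the OS name, instead of
    # probing for quirks of machines scrapped decades ago.
    OS=$(uname -s)
    case "$OS" in
    Linux|FreeBSD|NetBSD|OpenBSD|SunOS|AIX)
        echo "Configuring for $OS"
        ;;
    *)
        echo "$0: don't know $OS, please teach me" >&2
        exit 1
        ;;
    esac

The point being that one known-good decision per OS beats hundreds of
feature probes for problems nobody has had since the SYS III days.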
Poul-Henning, 2010-04-20
.. _phk:
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Poul-Hennings random outbursts
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
You may or may not want to know what Poul-Henning thinks.
.. toctree::

   sphinx.rst
   autocrap.rst
.. _phk_sphinx:
===================================
Why Sphinx_ and reStructuredText_ ?
===================================
The first school of thought on documentation is the one we subscribe
to in Varnish right now: "Documentation schmocumentation..." It does
not work for anybody.
The second school is the "Write a {La}TeX document" school, where
the documentation is seen as a stand-alone product, which is produced
independently. This works great for PDF output, and sucks royally
for HTML and TXT output.
The third school is the "Literate programming" school, which abandons
readability of *both* the program source code *and* the documentation
source, which seems to be one of the best access protections
one can put on the source code of either.
The fourth school is the "DoxyGen" school, which lets a program
collect a mindless list of hyperlinked variable, procedure, class
and filenames, and call that "documentation".
And the fifth school is anything that uses a fileformat that
cannot be put into a version control system, because it is
binary and non-diff'able. It doesn't matter if it is
OpenOffice, LyX or Word, a non-diffable doc source is a no go
with programmers.
Quite frankly, none of these works very well in practice.
One of the very central issues is that writing documentation must
not become a big and clear context-switch from programming. That
precludes special graphical editors, browser-based (wiki!) formats,
etc.
Yes, if you write documentation for half your workday, that works,
but if you write code most of your workday, that does not work.
Trust me on this: I have 25 years of experience avoiding such
tools.
I found one project which has thought radically about the problem,
and their reasoning is interesting, and quite attractive to me:
#. TXT files are the lingua franca of computers; even if
   you are logged in with TELNET using IP over Avian Carriers
   (which is more widespread in Norway than you would think)
   you can read documentation in .TXT format.

#. TXT is the most restrictive typographical format, so
   rather than trying to neuter a high-level format into .TXT,
   it is smarter to make the .TXT the source and reinterpret
   it structurally into the more capable formats.
In other words: we are talking about the reStructuredText_ of the
Python project, as wrapped by the Sphinx_ project.
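To give a feeling for the format, here is a tiny made-up sample of
reStructuredText source (the section title and contents are invented
purely for illustration)::

    A Section Title
    ===============

    Plain paragraphs are just plain text, readable as-is.

    * Bullet lists are asterisks in the margin.
    * Inline ``literals`` take double backquotes.

Note that the source is perfectly legible as a plain .TXT file, which
is the whole point of the second observation above.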
Unless there is something I have totally failed to spot, that is
going to be the new documentation platform in Varnish.
Take a peek at the Python docs, and try pressing the "show source"
link at the bottom of the left menu:
(link to random python doc page:)
http://docs.python.org/py3k/reference/expressions.html
Dependency-wise, that means you can edit docs with no special
tools; you need python+docutils+sphinx to format HTML, and a LaTeX
(pdflatex ?) to produce PDFs, something I only expect to happen
on the project server on a regular basis.
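For the record, the commands involved would be roughly the following
(the directory names are invented for illustration; Sphinx's latex
builder drops a Makefile with an ``all-pdf`` target into its output
directory)::

    $ sphinx-build -b html  doc-src doc-html    # needs python+docutils+sphinx
    $ sphinx-build -b latex doc-src doc-latex   # ditto
    $ make -C doc-latex all-pdf                 # needs pdflatex

Nothing in that chain is needed just to *edit* the .TXT sources.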
I can live with that; I might even rewrite the VCC scripts
from Tcl to Python in that case.
Poul-Henning, 2010-04-11
.. _Sphinx: http://sphinx.pocoo.org/
.. _reStructuredText: http://docutils.sourceforge.net/rst.html