Feedback from LotusScript.doc “beta”-testers

The following summarizes the feedback received so far:

  • Support for @author parameter.
  • Support for @updated_by parameter.
  • Support for @version parameter.
  • Support for the $TemplateBuild design element – via Chad Schelfhout.
  • Progress bar to show progress of documentation generation.
  • Sorting – make sure all subs, functions and properties in a class summary are sorted.
  • Fix an issue with malfunctioning internal links if there is a space between the sub/function name and the parameters – e.g. doFoo ().

Some of the above is easy and some requires a little more work and thought before implementation. I will post again once the issues have been resolved.

In the meantime be sure to also check the LotusScript.doc website for updates (I have set up an RSS feed there as well for the news).

Patch to Pebble submitted

The problem was caused by the madicon RSS Reader (which it turns out is a Lotus Notes application). Madicon apparently supplies a referer of “0” when accessing my RSS feed. The following is from the log (again – RegEx to the rescue):

lekkimworld@splinter:~/blogs/lekkimworld/logs > cat * | egrep '"GET .* - - "0"'
<hostname> - - [08/Aug/2005:09:56:30 +0200] "GET /rdf.xml" - - "0" "madicon RSS Reader"
<hostname> - - [08/Aug/2005:09:57:56 +0200] "GET /rss.xml" - - "0" "madicon RSS Reader"
<hostname> - - [08/Aug/2005:11:29:21 +0200] "GET /rss.xml" - - "0" "madicon RSS Reader"
...
...
...
lekkimworld@splinter:~/blogs/lekkimworld/logs >

I created an issue in the Pebble JIRA but, as a good Open Source citizen, I tried to fix the issue myself. It turned out to be quite easy to solve and was caused by a missing equals sign in the code (in my case getUrl() would return a java.lang.String of “0”).

The code was:
int index = getUrl().indexOf("://");
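// BUG: indexOf() never returns anything less than -1, so this test can never be true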
if (index < -1) {
   return getUrl();
}

But should be:
int index = getUrl().indexOf("://");
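// indexOf() returns -1 when "://" is not found, so <= catches the not-found case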
if (index <= -1) {
   return getUrl();
}

People programming in Java know that indexOf (similar to the LotusScript Instr function) returns -1 if the substring cannot be found, so the exception is thrown because the test for “less than -1” is never true.

Link to issue in Pebble JIRA.

Updates:
Updated on 14 August 2005: Uploaded patch to the JUnit test case as well.

Using Perl for search and replace on terminal-only Linux

When I started this blog I gave my categories the very descriptive ids of 0, 1, 2 etc. (all categories have a name and an internal id). In hindsight not a very wise decision. The problem with changing it has all along been that I didn’t want to go through all the files manually and change the category ids (Pebble, the blogging software I use, stores the posts in XML files).

I know it doesn’t sound like a big deal. Simply stop Tomcat, do a search and replace in the XML files and I would be laughing. The problem was that the blog is running on a terminal-only Linux box, hence no easy GUI for the job, and I had never done something like this in a terminal. On Windows I normally use UltraEdit for jobs like this.

I could probably have used the sed/awk programs on Linux but I have never really played around with these and they didn’t look very approachable to me. After the split of the blog and my recent dive into RegEx I thought I’d look into Perl and see whether there was an easy way to do it. Perl should be very easy to use for jobs like this since it understands RegEx natively.

Well – it was easy and combining the results from the find command with Perl was the solution.

Using find I could write a simple command to give me a recursive list of all the XML files (posts are stored in a <YYYY>/<MM>/<DD> directory structure).

#>find . -name [0-9]*.xml
./2004/11/21/12732398232.xml
./2004/11/23/39283232390.xml
...
...

Using Perl I could write a RegEx to change the text <category>3</category> to <category>mythtv</category> in the file foo.xml in the current directory. It may look a little confusing because of the backslashes: the / characters must be escaped since / is also the delimiter of the expression, and I escape < and > as well although that is not strictly required.

perl -pi -e 's/\<category\>3\<\/category\>/\<category\>mythtv\<\/category\>/' foo.xml

Nice. Combining the two and using the backtick character ` (for command substitution) I could feed the result of the find command to Perl.

perl -pi -e 's/\<category\>3\<\/category\>/\<category\>mythtv\<\/category\>/' `find . -name [0-9]*.xml`

If I had wanted to save the original files as backup copies with the .bak extension I could have changed the command slightly (the addition being the .bak suffix right after -pi).

perl -pi.bak -e 's/\<category\>3\<\/category\>/\<category\>mythtv\<\/category\>/' `find . -name [0-9]*.xml`

The syntax of the actual Perl RegEx is quite simple and consists of a command, the pattern used to find the text, and the replacement pattern, followed by optional modifiers.

s/<find pattern>/<replacement pattern - probably using back-references>/g

The ‘s’ at the start is for substitute (i.e. replace) and the optional ‘g’ at the end (I didn’t use it in my command since there is only one category-tag) can be used to do a global replacement if you want to replace all occurrences in the processed files.
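
To make the back-reference part concrete, here is a sketch (the cat_ naming is made up purely for illustration – it is not a command I actually ran): the parentheses in the find pattern capture the numeric id, $1 inserts the captured digits into the replacement, and the trailing g would rewrite every category-tag on a line rather than just the first one.

perl -pi -e 's/\<category\>(\d+)\<\/category\>/\<category\>cat_$1\<\/category\>/g' foo.xml

Run against foo.xml this would turn e.g. <category>3</category> into <category>cat_3</category>.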

If you want to get going with RegEx I would really suggest the book “Mastering Regular Expressions” from O’Reilly by Jeffrey E. F. Friedl.

Domino server certificate expires after 2 years rather than after 100 years

I was at the site of one of our new customers yesterday and they were having problems with a “strange message”. The message asked the users whether they wanted to access the server even though the certificate of the server had expired. This was strange since the customer was a relatively new Domino customer.

After recertifying the server and getting people back to work I remembered this technote I read some time back. Since it appeared to be the cause of the problem I thought I would share it.

Technote #1163626 was posted to the Lotus Domino Support RSS feed in March and described a bug where servers registered as “First server” on Domino 6.x/6.5.x are only certified for 2 years instead of the normal 100 years.

This really came back to bite this customer, so if you did a new setup on Domino 6.x/6.5.x, check the id-file of your servers…

Specified Content-Type not always honored in Notes 6.5.x

I normally use a simple document for stylesheets and JavaScript libraries in Notes. The form has a field for the “filename” (e.g. my_lib.js) and a field for the data. Until now I had been using a richtext field called HTML for the contents and changed the Content-Type in the form properties as appropriate (text/javascript, text/css etc.). This was working nicely until I created a page using the Strict XHTML DOCTYPE.

It became an issue because Firefox requires the Content-Type to be text/css for stylesheets when rendering in Strict mode. Otherwise the stylesheet won’t be used. Using the Live HTTP Headers extension for Firefox I saw that Domino kept setting the Content-Type to text/html even though the form properties were set to text/css.

Changing the richtext field name from HTML to Body solved the problem. Apparently Domino will not honor the specified Content-Type when the form has a field called HTML.

One of those things in Domino…

Blog split and lessons about mod_rewrite in Apache

I split the blog using the multi-user feature of Pebble (the blogging software I use). The actual split was quite easy and took about an hour, with most of the time spent separating the posts into the blogs where they belonged.

After the split each blog resides in its own directory web-wise, as opposed to being in the root (/) of the server. This changes the URL you use to access the main blog (the one you are reading now). My main concern was therefore to keep existing links working. This was accomplished using mod_rewrite in Apache to make this blog reside at the root as well as in the new directory.

Having never really dealt with mod_rewrite I had to figure that out first, but it turned out to be simpler than I had expected. I just added a couple of RewriteRules to the virtual host and reloaded Apache. Simple enough (apart from the trial-and-error process of figuring out the quirks of the regular expressions and mod_rewrite flags).

I now use the following RewriteRules:

RewriteEngine    on
RewriteLog       /usr/local/apache2/logs/rewrite.log
RewriteLogLevel  0
RewriteRule      ^/index.jsp                              /index.jsp          [L]
RewriteRule      ^/(.*)(themes/default|.css|.js)        /$1$2               [PT]
RewriteRule      ^/training$                              /training/          [R,L]
RewriteRule      ^/training/(.*)                          /training/$1        [L]
RewriteCond      %{REQUEST_URI}                           !(^/lekkimworld/.*)
RewriteCond      %{REQUEST_URI}                           !(^/(.*).css)
RewriteRule      ^/(.*)                                   /lekkimworld/$1     [PT]

Line 4 is used to make sure I access the overview page for the blogs. Lines 8-10 are used to make sure that any request for a page not explicitly belonging to the /lekkimworld sub-blog is rewritten to the lekkimworld blog. There is no need to handle the /training blog as well since RewriteRules are processed top to bottom, so any requests for that blog have already been handled.

The mod_rewrite flags on the right ([xxx]) are used to signal how the RewriteRule should be processed. If no flag is specified processing will continue down the chain. By adding an [L] flag to a rule, that rule will be the last one processed if its RegExp matches the request. [R] makes Apache send an HTTP 302 code back to the browser for a redirect. [PT] is used to pass the request through to the content handler without any further processing.