Age | Commit message | Author
|
* URL has to be defined in a "href" attribute instead of as the node's content
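A minimal before/after sketch, assuming the element in question is a "link" element (the URL is a placeholder):

```xml
<!-- wrong: URL as the node's content -->
<link>http://example.org/article/some-article/</link>

<!-- correct: URL in the "href" attribute -->
<link href="http://example.org/article/some-article/"/>
```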
|
|
* all datasources are generated as namespace-less XML
* the resulting documents have to define the XHTML namespace
** i.e. the article and page contents have to be copied into the XHTML namespace
* implemented XHTML copy helper templates
* modified page templates accordingly
* defined XHTML namespace in the master template
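A minimal sketch of such a copy helper in XSLT 1.0, matching the "xhtml_copy" mode mentioned further down the log; the actual helper templates may differ in detail, but the idea is to recreate every element inside the XHTML namespace while keeping attributes and text unchanged:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<!-- recreate each element inside the XHTML namespace -->
<xsl:template match="*" mode="xhtml_copy">
	<xsl:element name="{local-name()}" namespace="http://www.w3.org/1999/xhtml">
		<xsl:copy-of select="@*"/>
		<xsl:apply-templates select="node()" mode="xhtml_copy"/>
	</xsl:element>
</xsl:template>

<!-- copy text nodes unchanged -->
<xsl:template match="text()" mode="xhtml_copy">
	<xsl:copy/>
</xsl:template>

</xsl:stylesheet>
```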
|
|
|
|
* a gap between columns was needed because the alignment of all paragraphs was changed to justified
* increased overall browser compatibility of the column layout
** Firefox now keeps elements with the column class together if possible
|
|
* XHTML elements "h2" and "h3" are replaced with "h3" and "h4" respectively
** modified all existing contents accordingly
** this was done to avoid the gap between the primary heading and subheadings in the markdown rendering of the contents
* fleshed out the InputXSLT project page with further information
|
|
* modified master transformation accordingly
* fixed small syntax and grammar errors in the about page content
** a missing dot and a missing "and"
|
|
* comment paragraphs were neither separated nor justified
* columns on the archive page were incorrectly split inside their content in Chromium
** i.e. added "column-break-inside: avoid" property
|
|
|
|
* there is no reason for generating absolute links as the resulting pages will be served on their own domain
|
|
|
|
* Isso improvements
** textarea placeholder font-color
** comment footer font-size
* lines of "pre" elements did not break correctly
* minified all CSS expressions using YUI
|
|
* [isso](http://posativ.org/isso/) is a Disqus-like commenting system written in Python
** self-hosted, i.e. no privacy implications
** lightweight and provides all the features I require for this blog
* I thought about implementing a commenting system in InputXSLT but sadly I just don't have the time to think of and implement a reasonable XSLT-based solution
** maybe a simple REST service for pushing XML from the client into article-dependent comment directories can be implemented in the future
* added basic CSS styling for isso comments similar to how they currently look in the old Symphony-CMS-based blog
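For reference, the isso embed as documented upstream is a single script element plus a placeholder section; the comment server URL below is a placeholder:

```xml
<!-- isso client; data-isso points at the self-hosted comment server -->
<script data-isso="//comments.example.org/"
        src="//comments.example.org/js/embed.min.js"></script>

<!-- isso renders the comment thread for the current page into this element -->
<section id="isso-thread"></section>
```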
|
|
|
|
|
|
|
|
* it was primarily implemented this way to complement the CSS layout of the page
* after trying different approaches it turned out that plain sorting by digest size gives the best results for the contents of my personal page
|
|
* "01_files" contained a single "source" transformation which listed the contents of the "00_content" level
** this was unnecessary as the base "list" transformation already lists the contents of all levels
* added new "expression" mode to datasource meta tag processing in the task processing transformation
** this expression mode allows for the evaluation of arbitrary XPath statements
*** e.g. a query to the results of "list.xsl"
* modified base transformation datasource structures to include the level and meta tree
* modified all existing content transformations to query the level-tree instead of the deprecated "source.xml"
** i.e. XPath statements
** the main change is that directories are available as "directory" nodes instead of nodes named by the directory name
* these changes were implemented to simplify the architecture and to increase flexibility
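A hedged sketch of how such a query changes — the "$datasource" variable, the "entry" children, the directory name and the "@name" attribute are assumptions; only the generic "directory" node is taken from the change described above:

```xml
<!-- before: the directory appeared as a node named after the directory itself -->
<xsl:apply-templates select="$datasource/articles/entry"/>

<!-- after: directories are generic "directory" nodes selected by name -->
<xsl:apply-templates select="$datasource/directory[@name = 'articles']/entry"/>
```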
|
|
* modified all transformations requiring the author name accordingly
|
|
* transformations contain one or more "datasource" meta nodes
** these nodes define the required datasources
** up until now it was required to define the whole path to the file to be loaded
* the implementation of directory linkage in b942f8e removed the underlying need for providing the source / target prefix
** this commit now updates the generation transformations to match this change
*** this simplifies the datasource definition process for the end-user
*** additionally it makes the target / source directories easier to maintain
* changed cleanage task implementation to remove the whole directory and recreate it from scratch
** otherwise directory linkage and in turn the whole generation failed when the target directory did not exist in the first place
* removed task reordering in the process transformation
** tasks are now processed exactly as they were scheduled
** this was changed so that e.g. the "00_content" directory is linked before the first datasource is required
|
|
* fixed prettylist CSS to work in both WebKit and Gecko
|
|
* correct sorting by size requires the "data-type" attribute to be set to "number"
* pages are now first sorted as two halves
** descending / ascending respectively
* the sorted set is then split into actual halves
* the output loop alternates between these halves
* changed the test for existence to an actual test for existence instead of calculating it ourselves
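A simplified sketch of the alternating output loop: it sorts once by digest length, longest first, and then alternates between the two halves; the "page" elements, their "digest" child and the availability of an EXSLT node-set() function are assumptions:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:exsl="http://exslt.org/common"
                exclude-result-prefixes="exsl">

<xsl:template match="pages">
	<!-- sort all pages by digest length, longest first -->
	<xsl:variable name="sorted">
		<xsl:for-each select="page">
			<xsl:sort select="string-length(digest)" data-type="number" order="descending"/>
			<xsl:copy-of select="."/>
		</xsl:for-each>
	</xsl:variable>

	<xsl:variable name="set"  select="exsl:node-set($sorted)/page"/>
	<xsl:variable name="half" select="ceiling(count($set) div 2)"/>

	<!-- alternate between the long and the short half -->
	<xsl:for-each select="$set[position() &lt;= $half]">
		<xsl:variable name="index" select="position()"/>
		<xsl:apply-templates select="."/>
		<xsl:apply-templates select="$set[$half + $index]"/>
	</xsl:for-each>
</xsl:template>

</xsl:stylesheet>
```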
|
|
* the random sort order of page entries on category pages led to unsatisfying results
* the entries are now sorted in an alternating fashion depending on their digest length
* this produces a much more consistent and balanced output
|
|
* "plan.xsl" traverses the file-tree provided by "list.xsl" and determines the tasks to be executed
* "process.xsl" executes the tasks planned by "plan.xsl" in a sensible order
* this change was implemented to be able to e.g. schedule the linkage tasks for last
** performing them in tree-order caused problems when the generator tried to create symlinks inside non-existing directories
** additionally this further modularizes the processing chain
|
|
* this was done to be able to implement directory symlinking
* the generation process is now split into three transformations
** the actual work is performed by "list.xsl" and "traverse.xsl" respectively
** "make.xsl" wraps these two transformations
*** i.e. generation is now launched by executing "ixslt --transformation make.xsl"
* checked background images into VCS
|
|
* e.g. removing the target directory before each regeneration and symlinking CSS files
** this will be extended to include resource directories and so on
* renamed "formatter.xsl" stylesheet to "helper.xsl" as it now includes various helper templates
* finally checked the main CSS into the VCS
|
|
* while articles can be ordered by e.g. date there is no useful order for the pages in a given category
** this is why the order of pages on category overview pages is now random (in each generation)
|
|
* expanded "02_data/pages.xsl" transformation to include pages in subfolders
* "03_meta/categories.xsl" transformation generates a categorized view of all pages simmilar to the one provided for tags by "03_meta/tags.xsl"
* "99_result/category/category.xsl" transformation generates category overview pages
* added basic project related pages inside the "projects" category
|
|
* wrap text in "xsl:text" elements to clean up the output
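For example, wrapping literal text keeps the stylesheet's own whitespace out of the output (the element and text below are just an illustration):

```xml
<!-- without xsl:text the stylesheet's own indentation would end up in the output -->
<p>
	<xsl:text>Older articles</xsl:text>
</p>
```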
|
|
* changed archive page markup to enable setting two columns in CSS
|
|
* the feed should not return all articles ever posted but only e.g. the last five
* disabled indentation to fix source highlighting
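Limiting the feed could look roughly like this; the "article" elements, their ISO "date" child and the "feed_entry" mode are assumptions:

```xml
<!-- emit only the five most recent articles into the feed -->
<xsl:for-each select="article">
	<xsl:sort select="date" order="descending"/>
	<xsl:if test="position() &lt;= 5">
		<xsl:apply-templates select="." mode="feed_entry"/>
	</xsl:if>
</xsl:for-each>
```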
|
|
* last article on a page has to contain the CSS classes "last" and "article"
** previously the position had to be manually increased by one because of some whitespace-only nodes
** due to the removal of indentation to enable code highlighting this manual increase is not only unneeded but prevented the correct classes from being set
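With indentation disabled the plain position test is sufficient again; a minimal sketch, assuming it runs inside the per-article template:

```xml
<xsl:attribute name="class">
	<xsl:choose>
		<!-- the last article on a page gets both classes -->
		<xsl:when test="position() = last()">article last</xsl:when>
		<xsl:otherwise>article</xsl:otherwise>
	</xsl:choose>
</xsl:attribute>
```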
|
|
* a "previous" link was generated even if the end of the stream was reached
|
|
* disable indentation in both the page and datasource master stylesheets
** indentation was interfering with correct output of formatted code
* simplified call to formatter helper template
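i.e. the output declaration in the master stylesheets now reads roughly:

```xml
<!-- indented output would inject whitespace into "pre" blocks and break the formatted code -->
<xsl:output indent="no"/>
```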
|
|
* base URL is now a local webserver for more realistic testing
** i.e. otherwise the Atom feed is not served correctly
* article, page, tag and stream pages are now generated as "index.html" inside appropriately named directories
** this is needed for pretty URLs that actually work
|
|
* all XHTML elements contained an empty xmlns attribute
** this was fixed through a custom "xhtml_copy" mode template
|
|
* Atom was chosen in favor of RSS mainly because it is not easily possible to generate the required RSS timestamp in Xalan-C (the day of the week is required)
* modified master template accordingly
|
|
* the output node is defined in both the master and the datasource transformation
** i.e. it does not have to be defined in transformations making use of one of these transformations
|
|
* returns "Start" for the first page instead of "Page 0"
** this corresponds to the navigation
* changed "Start" navigation link in master template
* i.e. the index page is named "0", server config will have to be changed accordingly
** thought about generating it as "index" directly
** while this is possible using the XPath evaluation functionality of the target meta attribute, the lack of an if-statement in XPath 1.0 would require a very ugly workaround (e.g. the answer to http://stackoverflow.com/questions/971067/is-there-an-if-then-else-statement-in-xpath)
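For reference, the workaround in question is the usual XPath 1.0 string-selection trick; applied here it would look roughly like this (the "$first" and "$number" variables are illustrative):

```xml
<!-- emulates if/else: yields "index" when $first is true, otherwise the page number -->
<xsl:value-of select="concat(
	substring('index', 1, number($first) * 5),
	substring($number, 1, number(not($first)) * string-length($number)))"/>
```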
|
|
* the stream transformation iterates over the paginated article datasource implemented in 854eab6
* stream template contains navigation generation for traversing the article stream
|
|
|
|
* displays articles in descending order grouped by year
** based on the article metadata source implemented in adbe381
|
|
* this is needed for the implementation of an article datasource grouped by year
** this in turn is needed for the archive page template
* modified tags meta transformation and article result transformation accordingly
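The datasource itself is not shown here, but one common XSLT 1.0 way to build such a by-year grouping is Muenchian grouping; a sketch assuming "article" elements with an ISO "date" child:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

<!-- index articles by the year part of their ISO date -->
<xsl:key name="by_year" match="article" use="substring(date, 1, 4)"/>

<xsl:template match="articles">
	<archive>
		<!-- one group per distinct year, newest year first -->
		<xsl:for-each select="article[generate-id() =
				generate-id(key('by_year', substring(date, 1, 4))[1])]">
			<xsl:sort select="substring(date, 1, 4)" data-type="number" order="descending"/>
			<year value="{substring(date, 1, 4)}">
				<xsl:copy-of select="key('by_year', substring(date, 1, 4))"/>
			</year>
		</xsl:for-each>
	</archive>
</xsl:template>

</xsl:stylesheet>
```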
|
|
* xalan and/or InputXSLT namespace should only be included when they are actually required
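i.e. only stylesheets that actually call extension functions declare the corresponding prefixes, and exclude-result-prefixes keeps them out of the generated markup; roughly:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xalan="http://xml.apache.org/xalan"
                exclude-result-prefixes="xalan">
	<!-- templates using xalan extension functions go here -->
</xsl:stylesheet>
```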
|
|
* dates in lists such as the tag list are displayed in plain ISO formatting
** this way the article titles all start at the same horizontal offset which I find much more visually pleasing
* dates on article pages are now formatted the English way instead of being written in English but formatted as in Germany
|
|
* merges the content of the "00_content/meta.xml" file with additional data such as the available tags
* simplifies providing a basic datasource to every result transformation
* modified master, article and tag page template accordingly
|
|
* the master template generates a list of all available tags into the footer
** this currently requires the unaugmented tags datasource to be included into every template making use of the master template
|
|
* added "tags.xsl" meta datasource
** augments tags and their articles extracted from the content tree with article data from the article datasource
* added basic tag page template
* renamed "pages" directory to "page" as it is more intuitive from a user perspective
|
|
* otherwise it is not easily possible to add additional datasource layers between the content and result generation level
* changed meta URL appropriately
|