Age | Commit message | Author |
|
* this was done to match the recent extraction of the generic static site generation framework into a separate project
|
|
|
|
* still pointed to the old Symphony CMS upload location
|
|
* changed page source accordingly
|
|
* added title and type attributes to feed links in ATOM feed
* alternate feed link references the actual website
* added "rel=\"alternate\"" and title attribute to entry links
|
|
|
|
|
|
* there is no reason for embedding raw XHTML if we are able to generate the same in pure Kramdown
** this increases separation between content and presentation
** it will be easier to e.g. replace XHTML output with HTML5 in the future
|
|
* added age and further language information
* fixed grammar problems
* converted image tag to kramdown syntax
|
|
* images were hosted on imgur to mitigate some of the bandwidth usage of self-hosting
** as the website is now hosted on a virtual server, this is no longer needed
|
|
* changed "floor" to "ceiling" to correctly handle uneven article counts
|
|
|
|
* added base link
* declared the namespace for the whole stylesheet instead of in the feed node
* added id to entry nodes
* added a trailing slash to the feed id URL (see the sketch below)
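A rough sketch of the described stylesheet structure; all names and URLs below are assumptions, only the stylesheet-wide Atom namespace, the per-entry id and the trailing slash on the feed id reflect the commit:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns="http://www.w3.org/2005/Atom">
	<xsl:template match="/">
		<feed>
			<!-- trailing slash on the feed id -->
			<id>http://blog.kummerlaender.eu/</id>
			<link rel="self" href="http://blog.kummerlaender.eu/atom.xml"/>
			<entry>
				<!-- one id per entry, e.g. the article URL (placeholder) -->
				<id>http://blog.kummerlaender.eu/article/example</id>
			</entry>
		</feed>
	</xsl:template>
</xsl:stylesheet>
```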
|
|
|
|
|
|
* the URL has to be defined in an "href" attribute instead of as the node's content (see the example below)
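For illustration (the concrete URL is a placeholder):

```xml
<!-- before: URL as text content (not picked up as intended) -->
<link>http://blog.kummerlaender.eu/atom.xml</link>

<!-- after: URL in the "href" attribute as required by the Atom format -->
<link href="http://blog.kummerlaender.eu/atom.xml"/>
```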
|
|
* all datasources are generated as namespace-less XML
* the resulting documents have to define the XHTML namespace
** i.e. the article and page contents have to be copied into the XHTML namespace
* implemented XHTML copy helper templates (sketched below)
* modified page templates accordingly
* defined XHTML namespace in the master template
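A minimal sketch of such a copy helper, assuming a hypothetical "xhtml" mode; the actual template structure in the repository may differ:

```xml
<!-- recreate every element inside the XHTML namespace -->
<xsl:template match="*" mode="xhtml">
	<xsl:element name="{local-name()}" namespace="http://www.w3.org/1999/xhtml">
		<xsl:copy-of select="@*"/>
		<xsl:apply-templates select="node()" mode="xhtml"/>
	</xsl:element>
</xsl:template>

<!-- text nodes can be copied as-is -->
<xsl:template match="text()" mode="xhtml">
	<xsl:copy/>
</xsl:template>
```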
|
|
|
|
|
|
* a gap between columns was needed because the alignment of all paragraphs was changed to "justify"
* increased overall browser compatibility of the column layout
** Firefox now keeps elements with the column class together if possible
|
|
* this is currently done via a small Python script that fetches the timeline from Twitter and serializes it as XML
|
|
|
|
* this only applies for the "source/00_content" subtree
** the actual static site generation implementation is explicitly NOT licensed under the terms of the CC-BY-SA license
|
|
|
|
|
|
* XHTML elements "h2" and "h3" are replaced with "h3" and "h4" respectively
** modified all existing contents accordingly
** this was done to avoid the gap between the primary heading and subheadings in the markdown depiction of the contents
* fleshed out the InputXSLT project page with further information
|
|
* modified master transformation accordingly
* fixed small syntax and grammer error in about page content
** missing dot and missing "and"
|
|
* comment paragraphs were neither separated nor justified
* columns on the archive page were incorrectly split inside their content in Chromium
** i.e. added "column-break-inside: avoid" property
|
|
|
|
|
|
* basic legal information is provided in English
** further information is provided in German on a separate page
|
|
* there is no reason for generating absolute links as the resulting pages will be served on their own domain
|
|
|
|
* Isso improvements
** textarea placeholder font-color
** comment footer font-size
* lines of "pre" elements did not break correctly
* minified all CSS expressions using YUI
|
|
* [isso](http://posativ.org/isso/) is a Disqus-like commenting system written in Python
** self-hosted, i.e. no privacy implications
** lightweight and provides all the features I require for this blog
* I thought about implementing a commenting system in InputXSLT but sadly I just don't have the time to think of and implement a reasonable XSLT-based solution
** maybe a simple REST service for pushing XML from the client into article-dependent comment directories can be implemented in the future
* added basic CSS styling for isso comments similar to how they currently look in the old Symphony CMS based blog
|
|
* _obfuscated_ addresses used "punkt" and "ät" instead of "dot" and "at" to symbolize special characters
* cgit link was missing a closing colon
|
|
|
|
|
|
|
|
* it was primarily implemented this way to complement the CSS layout of the page
* after trying different approaches it turned out that plain sorting by digest size gives the best results for the contents of my personal page
|
|
* "00_content" directory is now explicitly referenced
* added "source_tree" variable to task processing transformation
** changed datasource meta-tag expressions to reference "source_tree" instead of "$root/source"
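A hypothetical sketch of such a variable; the select expression below is an assumption about the tree structure, not the actual code:

```xml
<!-- placeholder expression; the real path to the "00_content" level may differ -->
<xsl:variable name="source_tree" select="$root/source/directory[@name = '00_content']"/>
```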
|
|
* functionality for formatting markdown using kramdown and embellishing the result with e.g. syntax highlighting is required for all content types (see the sketch below)
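One possible shape of such a shared helper, sketched with placeholder names; the actual kramdown invocation through InputXSLT is intentionally left out because its API is not shown here:

```xml
<!-- hypothetical shared helper; all names are placeholders -->
<xsl:template name="formatted_content">
	<xsl:param name="raw"/>
	<!-- run $raw through kramdown and apply syntax highlighting here -->
</xsl:template>

<!-- both articles and pages could then reuse the same helper -->
<xsl:template match="article | page">
	<xsl:call-template name="formatted_content">
		<xsl:with-param name="raw" select="."/>
	</xsl:call-template>
</xsl:template>
```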
|
|
* "01_files" contained a single "source" transformation which listed the contents of the "00_content" level
** this was unnecessary as the base "list" transformation already lists the contents of all levels
* added new "expression" mode to datasource meta tag processing in the task processing transformation
** this expression mode allows for the evaluation of arbitrary XPath statements (illustrated below)
*** e.g. a query to the results of "list.xsl"
* modified base transformation datasource structures to include the level and meta tree
* modified all existing content transformations to query the level-tree instead of the deprecated "source.xml"
** i.e. XPath statements
** the main change is that directories are available as "directory" nodes instead of nodes named by the directory name
* these changes were implemented to simplify the architecture and to increase flexibility
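A purely illustrative example of what such an "expression" datasource could look like; the element and attribute names are assumptions, only the idea of evaluating an XPath query against the "list.xsl" result is taken from the commit:

```xml
<datasource mode="expression"
            select="$level_tree/directory[@name = 'articles']/file"/>
```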
|
|
* modified all transformations requiring the author name accordingly
|
|
* the article "2014-07-11_mapping_arrays_using_tuples_in_cpp11" contained a full link to blog.kummerlaender.eu instead of an relative one
* the page "input_xslt" contained a wrong cgit link
|
|
* transformations contain one or more "datasource" meta nodes
** these nodes define the required datasources
** up until now it was required to define the whole path to the file to be loaded
* the implementation of directory linkage in b942f8e removed the underlying need for providing the source / target prefix
** this commit updates the generation transformations to match this change (see the before/after sketch below)
*** this simplifies the datasource definition process for the end-user
*** additionally it makes the target / source directories easier to maintain
* changed the cleanup task implementation to remove the whole directory and recreate it from scratch
** otherwise directory linkage and in turn the whole generation failed when the target directory did not exist in the first place
* removed task reordering in the process transformation
** tasks are now processed exactly as they were scheduled
** this was changed so that e.g. the "00_content" directory is linked before the first datasource is required
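A before/after illustration of the simplified datasource definitions; the element names and paths are placeholders, only the dropped source/target prefix reflects the commit:

```xml
<!-- before: the datasource had to spell out the source prefix -->
<datasource type="file" source="source/00_content/articles/example.md"/>

<!-- after: the linked directory makes the prefix unnecessary -->
<datasource type="file" source="articles/example.md"/>
```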
|
|
* fixed prettylist CSS to work in both WebKit and Gecko
|
|
* correct sorting by size requires the "data-type" attribute to be set to "number"
* pages are now first sorted as two halves
** descending / ascending respectively
* the sorted set is then split into actual halves
* the output loop alternates between these halves (see the sketch below)
* changed the test for existence to an actual test for existence instead of calculating it ourselves
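A sketch of one possible shape of such an alternating output in XSLT 1.0, assuming an exsl:node-set() capable processor; the `$pages`, `page` and `digest` names are placeholders:

```xml
<!-- requires xmlns:exsl="http://exslt.org/common" on the stylesheet -->
<xsl:variable name="sorted_rtf">
	<xsl:for-each select="$pages">
		<xsl:sort select="string-length(digest)" data-type="number" order="descending"/>
		<xsl:copy-of select="."/>
	</xsl:for-each>
</xsl:variable>
<xsl:variable name="sorted" select="exsl:node-set($sorted_rtf)/*"/>
<xsl:variable name="half"   select="ceiling(count($sorted) div 2)"/>

<!-- alternate between the two halves of the sorted set -->
<xsl:for-each select="$sorted[position() &lt;= $half]">
	<xsl:variable name="i" select="position()"/>
	<xsl:apply-templates select="."/>
	<xsl:apply-templates select="$sorted[$half + $i]"/>
</xsl:for-each>
```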
|
|
* the random sort order of page entries on category pages led to unsatisfying results
* the entries are now sorted in an alternating fashion depending on their digest length
* this produces a much more consistent and balanced output
|
|
* "plan.xsl" traverses the file-tree provided by "list.xsl" and determines the tasks to be executed
* "process.xsl" executes the tasks planned by "plan.xsl" in a sensible order
* this change was implemented to be able to e.g. schedule the linkage tasks for last
** performing them in tree-order caused problems when the generator tried to create symlinks inside non-existing directories
** additionally this further modularizes the processing chain
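A rough sketch of the two-stage split; the element names, modes and task types are placeholders, only the plan/process separation reflects the commit:

```xml
<!-- plan.xsl: turn the file tree from "list.xsl" into task nodes,
     so that e.g. linkage tasks can be ordered explicitly -->
<xsl:template match="directory" mode="plan">
	<task type="link" target="{@name}"/>
</xsl:template>

<!-- process.xsl: execute the planned tasks in a sensible order -->
<xsl:template match="task[@type = 'link']" mode="process">
	<!-- create the symlink for the planned target here -->
</xsl:template>
```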
|