Six Principles of How I Write My Journal

Kids, this is the story of how I write my private journal.

Despite not being a frequent blogger, I am consistent about keeping a private journal that documents my life. The depth and introspection achievable in a private journal are different from what can be expected of a public blog or posts on a social network profile (which mostly show an external view), and over the years I have learned how greatly it has contributed to my decision-making process and understanding of self.

There can be several advantages of keeping a private journal:

  • Your writing pad behaves like a psychologist. Even if no one ever reads your journal, just writing about your life puts things in proper perspective. It’s like rubber duck debugging.
  • Reading past journal entries teaches you how much you changed and how you changed. You also find out that you have worried too much. Overall, it reduces your stress levels.
  • There are places you have been, people you have met, and decisions you have made that you would not like to forget many years down the road; understanding how your personality has developed thus far can be helpful for knowing how it could develop even further.
  • If you ever want to spend countless hours some time in 2030 telling your kids the exact and glorious story of ‘how I met your mother’, you’d be able to do so.
  • You might improve your writing skills, if you don’t have many to begin with.
  • It’s your autobiography material, in case you become that magnificently successful person you plan to be (don’t we all need ambitions?).

It took a few years to find the best method of sustaining a good private journal. I’d like to share some interdependent principles that I have gathered from the practice:

  1. Keep one entry per month. One month is not so short that the journal seems tedious and boring in retrospect, and not so long that you forget the key aspects of how you spent your time and your reflections on it. A month is also the sweet spot at which most things develop enough that an actual description of progress is possible.
  2. Start writing the entry about a week before the month is due, and keep revising it until the ‘deadline’. This drafting process produces a nice, readable entry.
  3. During those final-week revisions you should find yourself making plans for the next month, even writing them down, and going back to see what you planned the previous month.
  4. Write more than 1,000 words. I usually reach 2,000. It turns out your life is less boring than you thought.
  5. Pick a title for each entry, like a chapter from a book. And add a soundtrack. Just kidding.
  6. Write about everything, encrypt the damn thing, and don’t let anyone read it, including the NSA (I don’t back it up to the internet anywhere conspicuous). Otherwise, you would not write honest entries.

If you do that for 50 years, you get one hell of a book. That’s the book of your life. And whether you decide to use it exclusively for reminiscing and take it to your grave, or stop caring near the end and open it up to any interested party, it will have been worthwhile.


A quick and dirty source browsing in Emacs for the local installed Cabal repository

Browsing other people’s code is a necessity, especially in Haskell, where one often wants to understand how to tweak the types presented by a package beyond its normal APIs, or to understand the deep inner workings of a framework.

Luckily, we can employ our Emacs editor to our benefit when browsing through an installed Cabal repository.

First, we need to make sure hasktags is installed:

cabal install hasktags

Next, we use a bash script that creates an Emacs ETAGS file of all installed packages sources:

#!/bin/bash

# Extract the sources of all downloaded packages and tag them.
mkdir -p ~/.cabal/packages/src
cd ~/.cabal/packages/src || exit 1

# Replace the previously extracted copy, if any.
chmod -R a+w pkg 2>/dev/null
rm -rf pkg
mkdir pkg

# Unpack only the Haskell sources from every package tarball.
find .. -name '*.tar.gz' | xargs -L1 tar -C pkg --wildcards '*.hs' -zxf

# Keep the copy read-only so we don't edit it by accident.
chmod -R a-w pkg
~/.cabal/bin/hasktags -e pkg

This script keeps an extracted copy of all the sources, which we can then tag and browse.

Next, we should tell Emacs to use that tag file whenever we load a Haskell source:

(add-hook 'haskell-mode-hook
          (lambda ()
            (visit-tags-table "~/.cabal/packages/src/TAGS")))

Bonus: If you are uncomfortable using the regular functions for tag search (find-tag, find-tag-other-window), you can use the following function:

(defvar dax-tag-browsing-list '())

(defun dax-find-tag ()
  "Find the next definition of the tag at point, in another
window unless we have already started browsing tags."
  (interactive)
  (let ((this (current-buffer)))
    (if (memql this dax-tag-browsing-list)
        ;; Already browsing: replace the buffer in this window.
        (progn
          (find-tag (current-word))
          (if (not (eq this (current-buffer)))
              (add-to-list 'dax-tag-browsing-list (current-buffer))))
      ;; First lookup: open the tag in another window, keep focus here.
      (save-selected-window
        (save-excursion
          (find-tag-other-window (current-word))
          (setq dax-tag-browsing-list '())
          (if (not (eq this (current-buffer)))
              (add-to-list 'dax-tag-browsing-list (current-buffer))))))))

The two key properties of this function when searching for tags:

  • When you are editing your own code and looking up a tag, it opens a new window in the current frame but keeps the focus in the current one. Most likely you only needed to look at that other code and not edit it, so in that case hit ESC-ESC and resume working.
  • In case you decide to browse deeper, switch to the other window. Now you can go on with nested tag searching – further tag lookups replace the buffer in the new window, keeping your original editing window as it is.

 

Extending Monads for debugging in Haskell

One of the nice things about Haskell is the ability to extend the class of Monads.

One of the original purposes of Monads was to describe flow while leaving the implementation of the flow to a later stage. This allows one to define what happens as a side effect of the computational steps.

For example, let’s say we have a computation that we would like to debug. If we formulate it algebraically, it is harder to inspect the steps of the computation at run-time. So naturally we break the computation into statement-like stages, introducing code that traces the intermediate results in between.

However, sometimes we would also like to keep the performance of the ‘untraced’ computation as it was. In C++ we can use templates in order to instantiate two implementations of the computation. In C we would probably use macro trickery of some sort along with static inline functions that expand to nothing. In Python we would probably use a global debug variable, the __debug__ builtin, or a dedicated logging library.

In Haskell, this comes naturally as an extension of the Monad class, with the advantage of multiple instantiation. To illustrate, let’s extend Monad with MonadDebug:

import Control.Monad.Identity (runIdentity, Identity)
 
class Monad m => MonadDebug m where
  logDebug :: String -> m ()
 
instance MonadDebug IO where
  logDebug s = putStrLn s
 
instance MonadDebug Identity where
  logDebug _ = return ()

These instances make it possible to use the class function logDebug directly under IO, or under pure computations via the Identity Monad.

Let’s define a sample computation function:

computation :: MonadDebug m => Integer -> Integer -> m Integer
computation x y = do
  let t1 = x * 2 + y
  logDebug $ "Here, t1=" ++ (show t1)
  let t2 = t1 * 3 + x
  logDebug $ "Here, t2=" ++ (show t2)
  let t3 = t2 * 7 - x * x
  logDebug $ "Here, t3=" ++ (show t3)
  return t3

The type signature for computation is optional and could be inferred by the compiler, simply because we referenced logDebug under our Monad.

Now, let’s try to use it under the two environments. Here’s the code:

main :: IO ()
main = do
  putStrLn "Run computation as pure:"
  let t = runIdentity $ computation 1 2
  putStr "Result: "
  print t
  putStrLn ""
 
  putStrLn "Run computation with impure IO logging stages: "
  t' <- computation 1 2
  putStr "Result: "
  print t'

Let’s try to run it:

# runghc test.hs
Run computation as pure:
Result: 90
 
Run computation with impure IO logging stages: 
Here, t1=4
Here, t2=13
Here, t3=90
Result: 90

The advantage is that the compiler can optimize the pure Identity Monad computation much better than the impure one, and we achieve this without using any Haskell constructs that are overly sophisticated.

p.s. A novice reader might also be able to devise MonadDebug instances for the various Monad transformers, for the cases where MonadIO is the underlying Monad.
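To make the p.s. concrete, here is a minimal sketch of one such instance, assuming the MonadDebug class from earlier and the StateT transformer from the transformers package; the counter function and its log text are hypothetical examples, not from the original post:

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, evalStateT, get, modify)

class Monad m => MonadDebug m where
  logDebug :: String -> m ()

instance MonadDebug IO where
  logDebug = putStrLn

-- Lift logDebug through StateT: the log is emitted by the underlying
-- monad, so this single instance works for any MonadDebug m.
instance MonadDebug m => MonadDebug (StateT s m) where
  logDebug = lift . logDebug

-- A hypothetical stateful computation that logs its progress.
counter :: MonadDebug m => StateT Int m Int
counter = do
  modify (+ 1)
  n <- get
  logDebug $ "counter is now " ++ show n
  return n

main :: IO ()
main = evalStateT counter 0 >>= print
```

With an Identity instance like the one shown earlier in the post, the same counter runs as a pure computation and the logging collapses into no-ops.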

Tron: Legacy – a Review From the POV of An OS Architect

Attention: Spoilers ahead. If you haven’t watched the Tron movies then perhaps you shouldn’t read this post.

I believe the first time I watched the Tron movie was sometime in the 80′s, when I was a young kid, before I even knew how to write a single line of code. It didn’t make much sense to me then, and it certainly doesn’t make sense to me now. The visual effects, however, were very enticing for a movie of that time. Yesterday I watched the second Tron film – Tron: Legacy, and although they have modernized the visuals to a spectacular degree, the plot is horrid and, from a scientific point of view, makes even less sense than the first film.

For a very young kid, who cares about the dodgy plot? If this kind of movie inspires young individuals to become programmers and engineers, then it has fulfilled its purpose as a form of art. I remember watching the old Tron movie back then, only a few years after it came out, not caring about the plot but caring about the idea that computers can be programmed and the wonders we can make of them. I wanted to write computer programs at age 6, even before I had a computer.

However, when you grow up and understand how the world works and how engineering really comes into play, along with the differences between science and science fiction, watching this film requires a huge suspension of disbelief. To a degree that makes you laugh.

Here I’ve gathered the list of issues that came to mind while watching Tron: Legacy:

  • So, we have this Sam Flynn kid, the largest shareholder of a big software company, living in a garage somewhere next to a bridge in an industrial zone. He also likes to pull pranks on the company he owns. I don’t think this idea holds water – really, even Paris Hilton is not that dumb (oh, Paris, if you are reading this, I didn’t mean to offend you; please send me a private E-Mail, thanks).
  • The kid gets a message from his father, who disappeared about 20 years ago, shortly after the first film’s plot, so he goes to check the abandoned Flynn arcade shop, whose electricity bills someone has apparently been paying all this time. Somehow we assume that he never cared enough to check the old arcade shop before. Let’s say that happened because the kid is stupid (despite being a computer hacker like his father), as I claimed earlier.
  • So there, the kid finds a monitor of a running modern-day Linux system. This can be proved with a frame grab – you can see top running, Linux kernel threads, and udev. Here we assume that an unattended machine from the 80′s never suffered any power failures or hardware failures, and that it was running an Ubuntu system all that time. Also, it has a unique Xorg setup with an 80′s-looking green-on-black interface. Also, it has a few GB of RAM. There’s also a contradiction with the printed output in the other shell window, because it seems that the system is not of a Linux type but some odd Sun Solaris server. What?? Did someone port Ubuntu to Solaris on the SPARC architecture? I’m confused. This thing called SolarOS doesn’t even exist.

Terminal of the Flynn UNIX flavor.

  • Anyway, the kid looks at the bash shell history and sees the last commands that his father typed, which include ‘make’ and ‘make install’ of the program that controls the LASER device that disintegrated his body into the machine in the first film. Also, it appears that the father edited a file named last_testament_and_will.txt, contradicting what he says later in the film, that he didn’t know he wouldn’t come back from one of the sessions inside The Grid. We shall leave this plot bug aside.

Every IT guy paused at that frame.

  • Anyway, the stupid kid, knowing a LASER device is pointing at his back, decides to re-run the LASER control program. Famous last words? Well, then he is surprised to find himself inside The Grid moments later. Well done, kiddo.
  • Forgetting the ‘fact’ that in movies ‘everything can be done with LASERs’, and that the kid didn’t burn up on the spot, we are introduced to a virtual reality world with its own odd rules of physics, where, to please the non-technical viewer, each program is ‘humanified’ into a living character. ‘Second Life’, anyone? Except that this virtual world seems to be completely realistic in its version of physics. Of course, the sheer amount of computing power and data storage needed to provide a whole-world emulation, including the AIs that these programs are, is way beyond our time and certainly beyond whatever existed in the 80′s; let’s not forget about it. I mean that even if someone managed to write the code for such a system, there’s no hardware in the world that could run it.
  • After winning some games, the kid meets his father – or so it seems: a fork() of his father, who seems never to have aged. Kudos to the CGI for making Jeff Bridges 20 years younger – very nice work. However, I am not pleased with how this went further in the plot.

The CGI makes Jeff Bridges look younger. Amazing!

  • The kid manages to escape thanks to Olivia Wilde‘s character, whose name is not important – let’s call her Olivia. Then the kid meets his real father, who did age organically. Guys, the man was inside a machine for 20 years, but he aged organically? Does that mean the system can emulate organic material in all its biological and chemical complexity? Do you know how much data you need to store for that? Millions upon millions upon millions… of terabytes? Okay, I understand that you need some way of differentiating between the two characters that look like Jeff Bridges. But come on – perhaps it’s possible to emulate the real world that way when the entire planet is just computer hardware (like in The Matrix), but with an old server stowed away inside an old arcade shop? Yikes.
  • Anyway, the old Flynn, after the events of the first Tron movie involving the MCP (Master Control Program, or for IBM folks, the Linux port of SUSE – Mini Control Program – hey guys!), forgot to code in support for User programs to reintegrate themselves into the real world, so he cannot simply unplug from The Grid. Instead, they need to reach a physical location in the virtual world named ‘The Portal’ in order to escape.
  • The rest of the film involves stopping CLU from getting out of the Grid, with the heroes trying to get out of the Grid in the process, during which we learn that CLU wanted to create ‘the perfect system’ and Flynn remarks that the world has some merit in being imperfect, along with a load of other philosophical mumbling. One cannot help noticing the similarities between CLU’s regime inside the Grid and the totalitarian 20th-century regimes that tried to create ‘the perfect country’, e.g. pre-1945 Germany.
  • The end of the film baffles me, as the Flynn kid manages to get out of the Grid (while his father and CLU die in the process), backs up the entire Grid onto an SD card (what??), announces that he has decided to be a responsible company shareholder (what???), and – the oddest thing – he manages to materialize Olivia Wilde out of the computer with the laser. Yes, that program turned into flesh and blood. Or maybe an android. Anyway, the kid tells her “I want to show you something”. Here, as a guy, you start to wonder. He picks her up on his bike, takes the newly flesh-and-blood (and probably virgin) Olivia Wilde, and shows her… the sunrise. Yes, the sunrise.

Overall, I enjoyed this film despite its many faults. I’ll be looking out for the next film, mainly because the director said that they are going to delve into the relationship between the Flynn kid and the materialized Olivia Wilde program. I guess they will have to change the rating, though.

Success of VM infrastructure explained by historically crippled OS design

During the last decade we have seen the rise of server and desktop virtualization infrastructure as the official and standard means of creating services and resources isolation at both the client and server side. System virtualization provided rigid management of computing resources over standardized PC and server hardware for the first time.

However, when one thinks about it from an engineering perspective – why is full PC system virtualization needed in order to achieve ‘rigid management of computing resources’? Why couldn’t OS designers create those abilities in the first place? Why do we need hypervisors such as Xen, and software such as that provided by VMware, in order to attain these abilities?

The short answer is that the major OS writers did not provide the proper infrastructure for running sub-instances of the OS efficiently and managing those instances with Quality of Service and fail-over on a network of hardware. A standard OS only provides the means of running unmigratable processes; it is there just to provide isolation between processes, scheduling, memory management, hardware abstraction, and implementations of file systems and networking protocols.

In the past there were attempts to extend the OS concept to accommodate clustering capabilities, and Linux serves as good ground for these kinds of experiments. Take, for example, MOSIX, a project that extends Linux so that one Linux system encompasses more than one PC, allowing for process migration. However, process migration on its own is inadequate, since a process is just one part of what composes a ‘service’, and you would actually want to migrate whole services efficiently, in a fully isolated manner. Another project is ‘User Mode Linux‘ (often confused with UML, which is something else), which allows running the Linux kernel as a Linux process. At some point a research project added suspend/resume support to ‘User Mode Linux’, but apart from that ability, ‘User Mode Linux’ did not gain ground in the virtualization field.
Take another example – Cooperative Linux, written by yours truly. Apart from helping lots of Windows users fiddle with Linux on their desktop machines, it did not gain ground in the virtualization industry on servers (even though it also runs on Linux, with the best opportunity for performance). All these projects are just partial hacks in the direction of VM infrastructure, and none provides it in a productive sense. All major OS designs are crippled in the sense that you need to virtualize them (i.e. put them in “a box”) in order to attain VM infrastructure capabilities.

The key to VM infrastructure is the ability to manage your VMs on a cluster with enterprise-level capabilities. Of course, I don’t blame OS writers for not detecting these needs in the corporate world. Actually, a few engineers did detect them, and quite a while back: IBM already came up with the idea of virtualization on their mainframes. However, that was only the beginning – later on, when traditional OS designs for single desktop users emerged, those features were not needed. Unfortunately, those OSes also propagated to the server side, and only a few saw the potential to bring the virtualization feature back, under the cloak of the ‘full system virtualization’ method, adding a VM infrastructure on top.

Rethinking the Operating System concept, one could come up with a ‘Super Operating System‘ concept (i.e. SOS, also a funny overlap with the acronym for “Save Our Souls”) that would define a new Operating System from the ground up, one that also conforms to all the attributes currently provided by standard ‘VM infrastructure’. SOS should be able to run instances of itself, connecting those instances with their dependencies. For example, I would be able to run two instances and connect them with a virtual Ethernet link (of course, I’d need a separate network stack for each instance). It should also make it possible to migrate its sub-instances, along with the other aspects of present-day VM management.

I know that designing a new OS as a standalone mission is somewhat unessential, but I have other ideas to pursue in that venue that might improve the software world a little. That’s a topic for another post.