Thursday 17 July 2014

Installing Linux from Linux

I'm often underwhelmed by Linux's USB disk creator apps, both the KDE and GTK versions.
  • It crashes.
  • It requires a FAT32 partition?!
  • It boots to a noddy try-me-out application.
  • It's an annoyance on the way to installing Linux.
I've done netboot installs just to avoid the USB creator stuff. Netboot is cool if you have a few PCs to install, but a headache if you only have one to do.

This trick I've just discovered saves you from all that: you install Linux from Linux directly, with no ISO downloads and no reboots.  You can install Linux onto any hard drive connected to your current system.

In this case I wanted to install a full Linux system onto a 128GB USB3 drive I'd just purchased for the princely sum of €39.80.

This is the process, in my case, all done from a running Ubuntu system installing Ubuntu.

Create the partitions as you like them

Use whatever tools you prefer.
Personally I like to keep it simple and put everything on one partition. You have plenty of options for partitioning tools in Linux, unlike when you install from a pen drive, where you get only the one that comes with the installer.

Rather than GUI tools I prefer simply...

fdisk /dev/sdb
mkfs.ext4 /dev/sdb1


since this is what I'm familiar with.
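One word of caution before running fdisk: /dev/sdb is just where the drive happened to appear on my system, and it is not guaranteed to be the same on yours. It's worth double-checking which device is the new drive before writing a partition table to it:

```shell
# list block devices with sizes and models; a freshly plugged-in
# 128GB USB drive should be easy to spot by its size
lsblk -o NAME,SIZE,MODEL
```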

Install a base Linux to the partition

mount /dev/sdb1 /mnt/installer
apt-get install debootstrap
debootstrap trusty /mnt/installer


Where "trusty" is the Ubuntu release codename, a string I had to hunt around the internet to find.

This page helped http://packages.ubuntu.com/search?keywords=debootstrap
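If the machine you're installing from is itself Ubuntu (or another Debian derivative) and you want the same release, you can get a codename without hunting. debootstrap also accepts an architecture and a mirror as extra arguments; the mirror URL below is just an example:

```shell
# print the running system's release codename, e.g. "trusty"
lsb_release -sc 2>/dev/null || (. /etc/os-release && echo "${VERSION_CODENAME:-$VERSION_ID}")

# debootstrap can also be told the architecture and mirror explicitly, e.g.
# debootstrap --arch amd64 trusty /mnt/installer http://archive.ubuntu.com/ubuntu/
```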

debootstrap installs a minimal Linux to the hard drive: just enough for apt-get to be useful. No kernel or bootloader at this stage.  I notice debootstrap is what lxc uses to set up a base container.

Create a chroot and use familiar tools to install the system

mount --bind /dev /mnt/installer/dev
mount --bind /dev/pts /mnt/installer/dev/pts
mount -t proc proc /mnt/installer/proc
mount -t sysfs sys /mnt/installer/sys
chroot /mnt/installer



Now you're in a chroot where apt-get works and you can set up the system as you like it.  The following installs a base Ubuntu desktop; with this method you can install a simpler system if you prefer.

apt-get update
apt-get install language-pack-en-base
apt-get upgrade
dpkg-reconfigure tzdata
apt-get install grub-pc linux-image-generic
apt-get install ubuntu-desktop ubuntu-standard

N.B. grub pops up a question asking where you want to install it; be careful not to mess with the system you are running on.  The chroot is not a "jail": you can still do damage as root from where you are.

In the chroot you can also add a user (useradd), set the root password (passwd), change hostname (vi /etc/hostname) and/or do whatever setup you feel like before you boot the system's kernel for the first time.
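For example, still inside the chroot (the username and hostname below are placeholders, pick your own):

```shell
# create a user with a home directory and add it to the sudo group
useradd -m -s /bin/bash -G sudo paul   # "paul" is a placeholder username
passwd paul                            # set the user's password
passwd                                 # set root's password
echo mybox > /etc/hostname             # "mybox" is a placeholder hostname
```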

That's it. You now have Linux installed on the drive, without having to shut down the current system.

You can boot from this disk now or, as in my case, unmount the drive and use it to boot a different machine.
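Before pulling the drive out, leave the chroot and unmount everything in the reverse order it was mounted. A sketch, assuming the mount points used above:

```shell
# back on the host, after leaving the chroot with "exit" or Ctrl-D:
umount /mnt/installer/sys
umount /mnt/installer/proc
umount /mnt/installer/dev/pts
umount /mnt/installer/dev
umount /mnt/installer
```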

This blog post is pretty much copied from halfway down this page.

https://help.ubuntu.com/community/Installation/FromLinux#Without_CD


N.B. After setting up a system, and before logging in for the first time, I'd recommend removing all the advertising nonsense from the CLI.

apt-get remove unity-scope-musicstores
apt-get remove unity-scope-video-remote
apt-get remove unity-scope-yelp
apt-get remove unity-scope-home
apt-get remove unity-scope-gdrive


Ditching the Amazon advertising that Ubuntu installs by default is now rather difficult: you have to get rid of the whole webapps thing, which is a shame.

apt-get remove unity-webapps-service
apt-get remove unity-webapps-qml


And of course you can now triumphantly

apt-get remove usb-creator-gtk





Tuesday 8 July 2014

The Pomodoro Technique is for post-WWW kids with attention deficit.

The Pomodoro Technique has gained a fair bit of traction over the years as a way to mark time and get things done.  It's pretty simple, and pretty cool because it's pretty simple.

However, the recommended time slice of 25 minutes strikes me as very little.

A grown adult getting paid for a living to work at a job really ought to be able to get their head down for an hour to get stuff done. I'm sure I didn't have a problem concentrating for an hour when I was a kid. We had 3-hour exams and hour-long lessons, and I've never had a problem sitting through a film.

I reckon it's a problem that we consume media in smaller and smaller chunks these days, just as Fahrenheit 451 predicted.  I have noticed my attention span becoming a problem.  I used to attribute it to smoking; the nicotine monster kicks in well before an hour is up if you let it.  I presumed that after giving up smoking I was left with this short little span of attention and the rest of my life was going to be hard.

Then I tried the Pomodoro Technique and set the timer to an hour.  It turned out it was not that hard.  It took 4 or 5 attempts to get comfortable with an hour's continuous work.

I've stuck to an hour since and the evils of 451 are disappearing. I can keep my head in a book as long as I like and spend a lot less time on arsebook.  I recommend anyone sold on the Pomodoro Technique to increase the span as much as they can, or soon enough Amazon will remote-wipe your book collection and that knock on the door will be the fire brigade.



Saturday 21 June 2014

Generation X

I've recently had a lot of XML generation to do in Java code.  A boring job. It's an integration piece between two companies to swap some data.  The XML is as simple as it can be; each data item is needed. The data is fetched from a database, a webservice and the filesystem.  Very little of the XML is static.

The code is not elegant.  Using a templating system like JSP or freemarker would not be a good fit: all the data is pulled from somewhere in Java code.  There would be more code than template.

Writing this code in Java using the standard org.w3c.dom API results in a lot of boilerplate code.  For each new element you have to doc.createElement(), then reference the parent node and insert the element with someElem.appendChild(), then set the text content and/or attributes.

I've used Dom4j and JDom before and they are nicer APIs but still nothing revolutionary.


I woke up in the middle of the night last night with a really cool fix to this problem in my head.

XPaths are a very expressive way of searching XML; the idea was to use XPath-like expressions for generating new elements in the XML.


Still not sure about the name for this little library; for now it's "XGen" and the paths are "xGenPaths".

The following xGenPath will create the expected XML output.

/html/body/div#container/table.table/tbody/tr[5]

In a oneliner!
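For illustration, my reading of that path (judging by the examples further down, where [n] creates n sibling elements, # sets an id and . sets a class) is that it would produce something like:

```xml
<html>
  <body>
    <div id="container">
      <table class="table">
        <tbody>
          <tr/>
          <tr/>
          <tr/>
          <tr/>
          <tr/>
        </tbody>
      </table>
    </div>
  </body>
</html>
```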
   
This is much, much more concise than a series of create/append lines like this:

Element bodyElem = doc.createElement("body");
htmlElem.appendChild(bodyElem);
Element divElem = doc.createElement("div");
bodyElem.appendChild(divElem);

divElem.setAttribute("id", "container");
...

I set about development and, after one false start trying to subclass Java's Node and NodeList, I came up with a very neat little library in less than a day.  All the objects are org.w3c.dom objects and the code is separated into helper functions.
It's very easy to mix and match between the w3c APIs and the helper functions.
The output is an org.w3c.dom.Document.

It's simple and elegant.

The flow of typical code is like this

XGen xGen = XGenFactory.newInstance();

// New document with an xGenPath to start you off
xGen.newDocument("/html{lang=en}/head/title").setTextContent(TITLE);


// select with XPaths and create with xGenPaths
xGen.select("//head").create("link{rel=stylesheet}");



select() and create() return org.w3c.dom.NodeList instances containing the tail nodes just created or modified, with a few bells and whistles including the ability to mutate all the items in the NodeList with familiar methods.

select("//div").setAttribute("foo", "baz");


creates the foo attribute on all the divs in the document and returns the list of nodes.

This enables chaining statements together to create elements and attributes, and you end up with some very concise code for building up an XML doc.

XGen xGen = XGenFactory.newInstance();
xGen.newDocument("/xml")
    .create("div/ul/li[3]")
    .setTextContent("foo")
    .setAttribute("abc", "123")
    .create("a/span")
    .setAttribute("class", "grey");
xGen.serialize(System.out);




The NodeList returned by create() also has an each() method for generating specific content for each node created with an xGenPath.

final int[] i = new int[1];
XGen xGen = XGenFactory.newInstance();
xGen.newDocument("/xml")
    .create("div/ul/li[3]").setTextContent("a", "b", "c")
    .each(new NodeMutator() {
        public Node each(Node node) {
            ((Element)node).setAttribute("id", "123-" + i[0]++);
            return node;
        }
    });
xGen.serialize(System.out);


which results in

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<xml>
  <div>
    <ul>
      <li id="123-0">a</li>
      <li id="123-1">b</li>
      <li id="123-2">c</li>
    </ul>
  </div>
</xml>


Pretty slick if I do say so myself, and crying out for lambda functions.

I'm downloading JDK 8 as we speak, slightly worried that there is no IcedTea JDK 8 yet, but this library is going to be so much smoother with lambdas.


Get it here:

https://github.com/teknopaul/generation-x




Friday 30 May 2014

Simple Jenkins Job Prioritizing with Local Slaves

If you are familiar with Jenkins the title is probably all you need to read from this post.

Yesterday I hit upon a neat solution to the problem of scheduling many small jobs and a few big jobs on the same Jenkins server.

The problem is as follows...

We use Jenkins for all our build jobs but also for process automation and executing scripts. Jenkins provides a nice generic UI for executing parametrized scripts.

We have one Jenkins master server that compiles code and then installs it.  This is the only server with root SSH access to all the servers in all our clusters, which it needs to perform installations.

We also use the Jenkins master for automated operations: starting and stopping the systems, taking backups, etc., because the master node has the required root SSH access to all servers.  Our QAs and OPs get a job called START_SYSTEM which they can run and get a green light, without having to know the details of which scripts on which hosts need to run to boot the system.

Running more than 2 build jobs on the master node grinds the server to a halt. To prevent too many concurrent jobs we limit the number of executors on master to 2.

The problem arises when 2 compile jobs are executing: this prevents QA from being able to run operations jobs, which get held in the build queue.


The option of setting up a physical slave server is complicated because we would have to add the host's details to the SSH config of every server in all clusters. Then I hit upon the simple solution of creating a Jenkins slave on the same physical server as the master.


Hence: Simple Jenkins Job Prioritizing with Local Slaves.

Now the master Jenkins node has 2 executors that limit concurrent compilation jobs and we have 20 executors available on the same physical server with the same SSH permissions for smaller jobs.

It was very simple to set up, and in practice there seems to be no difference between running jobs on a local slave and running them in the master executor queue.

For now a "big jobs" queue and a "lightweight jobs" queue are sufficient; I can see that in the future we will want to divide jobs up further.  As a side effect of this setup, Jenkins admins get a neat way to disable all build jobs without impacting the operations jobs.