Monday, July 30th, 2012
I’m one of those people who think everything can always be a little bit better. Apparently companies aren’t interested in hearing about the customer experience, since it’s nearly always impossible to find a working, decent customer feedback channel on any commercial website.
How sad is it that the only way to properly get into contact with a company is via Twitter (which is, of course, limited to 140 characters, making it basically impossible to tell them about your issues or problems)? How sad is it that some companies actually artificially limit the number of characters you can enter in a feedback form on their website? Hello! Interwebtube bytes are free! There’s no reason to limit feedback to a thousand characters, guys. What’s that? Your time is too valuable to read through entire essays from frustrated consumers? Oh, that’s just fine! I’ll take my business somewhere else, thank you!
If any Quality Assurance Managers are reading this, I’ll make it real easy for you:
- An easy-to-find, CHEAP/FREE phone number on your site. One I can call for questions, feedback, etc. DO NOT try to sell me shit when I call you with a question or complaint. Just… don’t. I will take my business somewhere else.
- An easy-to-find question/feedback email address on your website.
- If you absolutely must have a form, make sure it doesn’t ask for my phone number, doesn’t limit the type of question I can ask (try including an “other reason” option?), and doesn’t make me jump through hoops to validate my information. I don’t want you to have my address, phone number, email address, or anything else. You don’t ask that information from customers who call you with a question, do you? Then allow – don’t force – me to fill it out on your forms. I just want to let you know that there’s a problem with your website! Today I had to fill out an online form that required my land-line phone number! “Hello?! 1999 called using their land-line! They want their ancient technology back!” Who still has a land-line, seriously?!
Companies, seriously… why do you make it so exceptionally hard for me to provide you with feedback? I’m trying to help! I want to let you know about broken restitution forms on your website, I want to tell you why I went to the competition so you can improve your products. I really do! So stop with the bullshit “Please participate in our questionnaire!” pop-ups that appear when I least expect or want them – that’s not why I’m on your site!
Stop wasting money on crappy Quality Assurance Managers. If your website doesn’t have email contact information, someone in your company needs to be fired.
END-OF-RANT
Monday, July 30th, 2012
All of us system admins know about nice, which lets you set the CPU priority of a process. Every now and then I need to run an I/O-heavy process. This inevitably makes the system intermittently unresponsive or slow, which can be a real annoyance.
If your system is slow to respond, you can check whether I/O is the problem (which it usually will be) using a program called iotop, which is similar to the normal top program, except it shows disk reads/writes instead of CPU and memory usage. You may need to install it first:
# aptitude install iotop
The output looks like this:
Total DISK READ: 0.00 B/s | Total DISK WRITE: 0.00 B/s
  TID  PRIO  USER      DISK READ    DISK WRITE   SWAPIN     IO>      COMMAND
12404  be/4  fboender  124.52 K/s   124.52 K/s   0.00 %     99.99 %  cp winxp.dev.local.vdi /home/fboender
    1  be/4  root        0.00 B/s     0.00 B/s   0.00 %      0.00 %  init
    2  be/4  root        0.00 B/s     0.00 B/s   0.00 %      0.00 %  [kthreadd]
    3  be/4  root        0.00 B/s     0.00 B/s   0.00 %      0.00 %  [ksoftirqd/0]
As you can see, the copy process with PID 12404 is taking up 99.99% of my I/O, leaving little for the rest of the system.
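If the list gets crowded, iotop can also be told to show only the processes that are actually doing I/O. The -o, -b and -n options below exist in the iotop versions I’ve used; check iotop --help if yours differs:
$ sudo iotop -o          # interactive, but only show processes that are actually doing I/O
$ sudo iotop -obn 3      # batch mode: print 3 iterations and exit (handy for logging)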
In recent Linux kernels (2.6.13 and later, with the CFQ I/O scheduler), there’s an option to renice the I/O priority of a process. The ionice tool allows you to renice processes from userland. It comes pre-installed on Debian/Ubuntu machines as part of the util-linux package. To use it, you must specify a priority scheduling class using the -c option:
- -c0 is an old deprecated value of “None”, which is now the same as Best-Effort (-c2)
- -c1 is Real Time priority, which will give the process the highest I/O priority
- -c2 is Best-Effort priority, which puts the process in a round-robin queue where it will get a slice of I/O every so often. How much it gets can be specified using the -n option, which takes a value from 0 (highest priority) to 7 (lowest)
- -c3 is Idle, which means the process will only get I/O when no other process requires it.
For example, say I want a certain process (PID 12404) to use I/O only when no other process requires it, because the task is I/O-heavy but not high priority:
# ionice -c3 -p 12404
The effects are noticeable immediately. My system responds faster, there is less jitter on the desktop and the commandline.
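You don’t have to renice after the fact, either; you can start a job with a low I/O (and CPU) priority right away. A minimal sketch, using the same kind of copy job as above:
# Start the copy in the Idle I/O class, with the lowest CPU priority as well
$ ionice -c3 nice -n 19 cp winxp.dev.local.vdi /home/fboender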
Nice.
Thursday, July 26th, 2012
Once you quit Vim, the undo history for that file is gone. This sometimes gives me problems if I’ve accidentally made a change in a file without knowing it, usually due to a stray Vim command which, for instance, capitalized a single letter.
There’s an option which allows you to make the undo history persistent between Vim sessions. That means you can still undo changes you made in a file, even if you’ve quit Vim in the meantime.
You can add the following to your .vimrc to enable it:
set undofile " Maintain undo history between sessions
This will create undo files all over the place, which look like this:
-rw-r--r-- 1 fboender fboender 320 2012-07-26 10:23 bad_gateway.txt
-rw-r--r-- 1 fboender fboender 523 2012-07-24 14:51 .bad_gateway.txt.un~
You can remedy this by including the following option in your configuration:
set undodir=~/.vim/undodir
Make sure to create the undodir:
$ mkdir ~/.vim/undodir
The undo files will now be saved in the undodir:
$ ls -la .vim/undodir/
total 12
drwxr-xr-x 2 fboender fboender 4096 2012-07-26 10:32 .
drwxr-xr-x 12 fboender fboender 4096 2012-07-26 10:24 ..
-rw-r--r-- 1 fboender fboender 519 2012-07-26 10:32 %home%fboender%bad_gateway.txt
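As far as I know, Vim never cleans up old undo files itself, so the undodir keeps growing. If that bothers you, something along these lines (adjust the retention period to taste) prunes them:
# Remove undo files that haven't been modified in 90 days
$ find ~/.vim/undodir -type f -mtime +90 -delete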
Edit: Thanks to Daid Kahl for pointing out the mistake in the comment after the command.
Thursday, July 26th, 2012
My smartphone doesn’t have a data plan because the last thing I want is to be able to check my email and facebook while I’m not behind my PC. I do like to read though, so I want to use my smartphone to read content I’ve previously somehow flagged as interesting.
I’ve tried many apps. Instapaper, Diigo, Readability and a few others. All of them suck.
Some suck because they don’t include inline images from articles in the offline version. Some suck because they’re not free. Some suck because they don’t extract just the article text, but instead make the entire webpage available offline, which doesn’t really work on my tiny screen. Others suck because they don’t sync properly.
And then there was Pocket. It includes inline images in the offline version, but leaves out the rest of the webpage. If it can’t reliably detect where an article starts or ends, it makes the entire page available offline, and even on my tiny screen it still manages to make those pages very readable. It’s free, and it has no limits on how many articles you can make available offline. It can also make images and videos available for offline viewing.
Pocket is by far the best option for reading offline content on your smartphone / tablet. Get it here, you won’t be disappointed.
Sunday, June 3rd, 2012
For the longest time, I’ve searched for a way to run terminal emulators in Vim buffers.
As a kind of work-around, I created Bexec, which allows you to run the current contents of a buffer through an external program. It then captures the output and inserts/appends it to another buffer.
Although Bexec works reasonably well, and still has its uses, it’s not a true terminal emulator in Vim. Today I finally found a Vim plugin that lets you actually run interactive commands / terminals in Vim buffers: Conque.
It requires Vim with Python support built in. Installation is straight-forward if you’ve got the requirements.
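If you’re not sure whether your Vim has Python support compiled in, a quick check (assuming a reasonably standard Vim build) is to look for a “+python” entry in its feature list:
$ vim --version | grep -i python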
Download the .vmb file, open it in Vim, and issue:
:so %
It will then be installed. Quit Vim, restart it, and you can start using it:
:ConqueTerm bash
Very awesome.
Tuesday, May 1st, 2012
Google, with their Google Chrome OS, are betting on our computing-experience moving to the Cloud in the future. Some people agree with that prediction. As Hacker News user Wavephorm mentions:
The “All-Web” paradigm is coming, folks. And it really doesn’t matter how much you love your iPhone, or your Android, or Windows phone. Native apps are toast, in the long run. Your data is moving to the cloud — your pictures, your music, your movies, and every document you write. It’s all going up there, and local hard drives will be history within 3 years. And what that means is ALL software is heading there too. Native apps running locally on your computer are going to be thing of the past, and it simply blows my mind that even people here on HackerNews completely fail to understand this fact.
Although I believe many things will be moving to the cloud in the (near) future, I also believe there are still major barriers to be overcome before we can move our entire computing into the cloud. An ‘All-web’ paradigm, where there are NO local apps – where there is NO local persistent storage – is a long, long way off, if not entirely impossible.
The Cloud lacks interoperability
One major thing currently missing from the Cloud is interoperability between Web applications. As mentioned on Hacker News: “local hard drives will be history”. I believe we are greatly underestimating the level of interoperability local storage offers. Name a single native application that can’t load and save files from and to your hard drive. Local storage ties all applications together and allows them to work with each other’s data. I can just as easily open a JPEG in a picture viewer as in a photo editing package, or set it as my desktop background, etcetera.
If the All-web paradigm is to succeed, Web apps will need a way to talk to each other or at the very least talk to some unified storage in the Cloud without the user needing to download and re-upload files each time. Right now, if I want to edit a photo stored in Picasa in a decent image editor, I have to download it from Picasa, upload it to an online image editor, download it from there and upload it again to Picasa (and removing the old photo). I have a pretty decent internet connection, but most of my time will be spent waiting 80 seconds for a 3.5 Mb picture to download, upload, download again, etc.
Perhaps cloud storage providers will start publishing APIs so that other web apps can access your files directly, but given that the Web has historically been about being as incompatible as possible with everything else, I believe this will be a very large, if not insurmountable, problem.
User control will be gone
When Google launched the new version of its Gmail interface, many people were annoyed. Many people are annoyed with Facebook’s Timeline interface. Many of my friends still run ancient versions of WinAmp to play their music, simply because it’s the best music player out there. With the All-web paradigm, the choice of which programs you use, and which versions of them, will be gone. The big men in the Cloud will determine what your interface will look like. There will be no running of older versions of programs. Unless web applications find some way to unify storage (as I mentioned earlier), there will be no way to migrate to another application. At the very least it will be painful.
Cloud storage is expensive
I’m sure we all enjoy our cheap local storage. If I need to temporarily store a few hundred gigabytes of data, I don’t even have to think about where or how to store it. My home computer has installs for twelve different operating systems through VirtualBox, which take up about 100 GB. My collection of rare and local artists’ music is around 15 GB. Backups of my entire computing history take up about 150 GB. Where in the Cloud am I going to store all of that? Dropbox? It doesn’t even list a price for that much storage! Going from the prices they do list, replicating my local storage in the Cloud would cost me about $200. A month.
Internet connections are not up to par
We may think our internet connections are fast, and compared to a few years ago they are, but they’re not nearly fast enough to do our daily computing in the Cloud. First of all, upstream bandwidth is generally far more limited than downstream. If the All-Web paradigm is going to work, that has to change. But home internet connections aren’t really the problem, I think. The real problem is mobile networks. The All-web paradigm requires being online all the time, everywhere. Lately there’s been a trend (at least in my country) of reducing mobile internet subscriptions from unlimited data plans to very limited ones. A 500 MB limit per month is not uncommon now. The telcos’ reasoning is that they need to recuperate the costs of operating the network. Some still offer “unlimited” data plans where, after exceeding your monthly quota, you’re throttled back to 64 kbit/s. That’s enough to check my email (barely), but it surely isn’t enough to do anyone’s day-to-day computing in the Cloud.
And that’s the situation here, in one of the most well-connected countries in the world. Think of the number of countries that aren’t so fortunate. If nothing else, those countries will keep local computing alive.
Pricing
Most web apps require a monthly subscription to do anything meaningful with them. It could be just me, but I’d much rather pay a single price up front, after which I can use my purchase for as long as I like. With the All-web paradigm, I’d have to pay monthly fees for Google (documents/storage), Dropbox, Netflix, some music streaming service, a VPS for development, and a lot more.
With the current prices, the monthly costs to me would be unacceptable. It’s a lot cheaper to get a simple $400 desktop computer, which can take care of all those needs. Say I use it for four years; that comes down to a little over $8 a month. The cheapest Dropbox account is more expensive than that.
But the high price isn’t really the problem. The problem is continuous payments. Say I lose my job, and I have to cut costs. With local computing, I could say “well, this PC is old, and should be replaced, but since I’m low on money, I’ll keep using it for another year”. Cancelling my subscription to some/all my services means I lose some/all my data. Remember, we’re talking about an All-web environment here. No local storage large enough to store my data. The risks are simply too big.
Privacy
There’s no such thing as privacy in the Cloud. Your personal information and data will be mined, abused and sold. You have no control over it. The more data that is stored, the larger the temptation for companies and criminals to monetize that data. Right now, most people don’t care too much about privacy. We still have a choice about what we put in the cloud and what we keep to ourselves. That picture of your girlfriend in lingerie won’t be ending up on Facebook any time soon, right? With an All-web environment, you’ll have no choice. Want to store or edit a picture? It has to move to the cloud. Even those most unconcerned with privacy won’t accept that.
The best we can hope for would be that web companies will treat our data confidentially. Hope. We have no control. Arguments that companies who abuse our data will soon lose all their users are not relevant. Your data will already be abused by that time. We only need a single incident for people to start distrusting the All-web paradigm. In fact, I think that has already happened.
Conclusion
In the future, many local applications will move to the cloud. In fact, many already have. Music and movie streaming, word processing, image editing, storage; they will move more and more to the Cloud. The All-web paradigm though, will never fly. It would be a huge step back in terms of convenience, cost, privacy and abilities. Local computing is here to stay. It may become more and more of a niche market, but it won’t disappear.
Tuesday, April 24th, 2012
(Please note that this post is not specific to Windows nor Cygwin; it'll work on a remote unix machine just as well)
On my netbook, I use Windows XP in combination with Cygwin (A unix environment for Windows) and Mintty for my Unixy needs. From there, I usually SSH to some unix-like machine somewhere, so I can do systems administration or development.
Unfortunately, the default use of an SSH agent under Cygwin is difficult, since there's no parent process that can run it and put the required information (SSH_AUTH_SOCK) in the environment. On most Linux distributions, the SSH agent is started after you log in to an X11 session, so that every child process (terminals you open, etc.) inherits the SSH_AUTH_SOCK environment setting and SSH can contact the ssh-agent to get your keys. The result? You have to start a new SSH agent, load your key and enter your password for each Mintty terminal you open. Quite annoying.
The upside is, it's not very hard to configure your system properly so that you need only one SSH agent running on your system, and thus only have to enter your password once.
The key lies in how ssh-agent creates the environment. When we start ssh-agent in the traditional manner, we do:
$ eval `ssh-agent`
Agent pid 1784
The command starts the SSH agent and sets a bunch of environment variables:
$ set | grep SSH_
SSH_AGENT_PID=1784
SSH_AUTH_SOCK=/tmp/ssh-QzfPveH696/agent.696
The SSH_AUTH_SOCK is how the ssh command knows how to contact the agent. As you can see, the socket filename is generated randomly. That means you can't reuse the socket, since you can't guess the socket filename.
Good thing ssh-agent allows us to specify the socket filename, so we can easily re-use it.
Put the following in your ~/.bashrc:
# If no SSH agent is already running, start one now. Re-use sockets so we never
# have to start more than one session.
export SSH_AUTH_SOCK=/home/fboender/.ssh-socket
ssh-add -l >/dev/null 2>&1
if [ $? = 2 ]; then
    # No ssh-agent running; clean up any stale socket and start a new agent
    rm -f "$SSH_AUTH_SOCK"
    # >| forces the redirection to overwrite the file even if noclobber is set
    ssh-agent -a "$SSH_AUTH_SOCK" >| /tmp/.ssh-script
    source /tmp/.ssh-script
    echo "$SSH_AGENT_PID" >| ~/.ssh-agent-pid
    rm /tmp/.ssh-script
fi
What the script above does is set the socket filename manually to /home/yourusername/.ssh-socket. It then runs ssh-add -l, which attempts to contact the ssh-agent through that socket. An exit status of 2 means ssh-add couldn't connect to an agent at all, so we clean up any stale socket and start a new agent listening on that fixed path.
Now, all you have to do is start a single terminal, and load your keys once:
$ ssh-add ~/.ssh/fboender\@electricmonk.rsa
Enter passphrase for .ssh/fboender@electricmonk.rsa: [PASSWORD]
Identity added: .ssh/fboender@electricmonk.rsa (.ssh/fboender@electricmonk.rsa)
Now you can start as many new terminals as you'd like, and they'll all use the same ssh-agent, never requiring you to enter your password for that key more than once per boot.
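The script also writes the agent's PID to ~/.ssh-agent-pid, which it doesn't use itself; it's just there so you can shut the shared agent down by hand if you ever want to. A small sketch, matching the file names used in the script above:
# Stop the shared agent and clean up the fixed socket and PID file
$ kill $(cat ~/.ssh-agent-pid)
$ rm -f ~/.ssh-socket ~/.ssh-agent-pid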
Update:
I've updated the script with suggestions from Anthony Geoghegan. It now also works if noclobber is set.
Monday, April 9th, 2012
While setting up Monit, a tool for easy monitoring of hosts and services, I ran into a problem. I had configured Monit to email alerts to my email address, using my personal mail server (IP/email addresses obfuscated to protect the innocence of my email inbox):
set mailserver 211.211.211.0
set alert ferry.boender@example.com
set httpd port 2812
    allow 0.0.0.0/0.0.0.0
check file test path /tmp/monittest
    if failed permission 644 then alert
After starting it with monit -c ./monitrc, I could reach the webserver at port 2812. I also saw that the check was failing:
# monit -c ./monitrc status
File 'test'
  status                    Permission failed
  monitoring status         monitored
  permission                600
However, it was not sending me emails with startup notifications and status error reports. My server’s mail log showed no incoming mail, making it seem like Monit wasn’t even trying to send email. Turning on Monit’s logging feature, I noticed:
# monit -c ./monitrc quit
# monit -c ./monitrc -l ./monit.log
# tail monit.log
error : 'test' permission test failed for /tmp/monittest -- current permission is 0600
error : Sendmail: error receiving data from the mailserver '211.211.211.0' -- Resource temporarily unavailable
I tried a manual connection to the mail server from the host where Monit was running, and it worked just fine.
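For what it’s worth, a manual test along these lines (assuming telnet is installed; the IP is the obfuscated one from above) shows both whether the SMTP port answers at all and how long the server waits before sending its greeting banner:
$ telnet 211.211.211.0 25    # note how long it takes before the "220" greeting appears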
The problem turned out to be a connection timeout to the mail server. Most mail servers nowadays wait a certain number of seconds before accepting connections. This reduces the rate at which spam can be delivered. Monit wasn’t waiting long enough before determining that the mail server wasn’t working, and bailed out of reporting errors with a ‘Resource temporarily unavailable‘.
The solution is easy. The set mailserver configuration allows you to specify a timeout:
set mailserver 211.211.211.0 with timeout 30 seconds
I’m happy to report that Monit is now sending email alerts just fine.
Tuesday, March 27th, 2012
Some time ago, my mother bought a new laptop. It came preinstalled with Windows Vista, which proved to be quite the disaster. The laptop was nowhere near fast enough to run it, so I installed Ubuntu on it instead. This allowed my mom to do everything she needed to do with the laptop, while at the same time making it easy for me to administer the beast.
One day my mom phoned me, and explained about a problem she was having:
“Whenever I move the laptop into the kitchen, it stops working!”
Now my mom is no computer expert, but she picked up Ubuntu quickly and has never needed much hand-holding when it comes to using the laptop. This one, however, sounded to me like one of those situations where the user couldn’t possibly be correct. We went through the basic telephone support routine, but she persisted in her observation that somehow the kitchen was responsible for her laptop misery.
Eventually, after deciding the problem couldn’t be fixed over the phone, I agreed to come over to my parents’ house the next evening to take a look at it. With my general moody “a family member’s PC needs fixing” attitude and a healthy dose of skepticism (“this is going to be one of those typical the-cable-isn’t-plugged-in problems”), I arrived at my parents’.
“Okay, let’s see if we can’t fix this problem”, I said, as I powered up the laptop upstairs. Everything worked fine. Picking up the laptop, I moved it downstairs into the living room. No problems whatsoever. Next, the kitchen. And lo and behold:
The laptop crashed almost immediately.
“Coincidence”, I thought, and tried it again. And again, as soon as I entered the kitchen, the laptop crashed. I… was… Stunned! I had never encountered a problem like this before. What could possibly make it behave like that?
After pondering this strange problem for a while, I thought “what’s the only location-dependent thing in a laptop?”, and it dawned on me that it might just be related to the WiFi. I powered up the laptop once again in the living room, completely turned off the WiFi by rmmod-ing the relevant kernel modules, and entered the kitchen. No crash. It kept on working perfectly. Until I turned on the WiFi again.
With the aid of some log files (which I should have checked in the first place, I admit), I quickly found the culprit. The very last thing I saw in the log files just before the computer crashed… an attempt to discover the neighbor’s WiFi! A wonky WiFi router in combination with buggy drivers caused the laptop to crash, but only when it came in range of said WiFi router. And that happened only in the kitchen!
In the end I disabled automatic WiFi discovery on the laptop, since my mom didn’t really take it out of the house anyway, and the problems disappeared. I never encountered a problem like that again, but I did learn one thing:
No matter how impossible the problem may seem… The user isn’t always wrong.
Thursday, February 23rd, 2012
A programmer once built a vast database containing all the literature, facts, figures, and data in the world. Then he built an advanced querying system that linked that knowledge together, allowing him to wander through the database at will. Satisfied and pleased, he sat down before his computer to enjoy the fruits of his labor.
After three minutes, the programmer had a headache. After three hours, the programmer felt ill. After three days, the programmer destroyed his database. When asked why, he replied: “That system put the world at my fingertips. I could go anywhere, see anything. Because I was no longer limited by external conditions, I had no excuse for not knowing everything there is to know. I could neither sleep nor eat. All I could do was wander through the database. Now I can rest.”
— Geoffrey James, Computer Parables: Enlightenment in the Information Age
I was a major content consumer on the Internet. My Google Reader had over 120 feeds in it, producing more than a thousand new items every couple of hours. I religiously read Hacker News, Reddit and a variety of other high-volume sources of content. I have directories full of theoretical science papers, articles on a wide range of topics and many, many tech books. I scoured the web for interesting articles to save to my tablet for later reading. I was interested in everything. Programming, Computer Science, Biology, Theoretical Particle Physics, Psychology, rage-comics, and everything else. I could get lost for hours on Wikipedia, jumping from article to article, somehow, without noticing it, ending up at articles titled “Gross–Pitaevskii equation” or “Grand Duchy of Moscow”, when all I needed to know was what the abbreviation “SCPD” stood for. (Which, by the way, Wikipedia doesn’t have an article for, and means “Service Control Point Definition”.)
I want to make it clear I wasn’t suffering from Information Overload by any definition. I was learning things. I knew things about technology which I hadn’t even ever used myself. I can tell you some of the ins and outs of iPhone development. I don’t even own an iPhone. I can talk about Distributed Computing, Transactional Memory and why it is and isn’t a good idea, without having written more than a simple producer/consumer routine. I’m even vehemently against writing to shared memory in any situation! I can tell you shit about node.js and certain NoSQL databases without even ever having installed – much less dived into – them. Hell, I don’t even like Javascript!
The thing is: even though I was learning about stuff, it was superficial knowledge, lacking the context and the kind of foundational understanding that lets you draw the conclusions you’re reading about for yourself, without the help of some article. I didn’t pause to think about the conclusions drawn in an article, or to let the information sink in. I read article after article. I wasn’t putting the acquired knowledge into practice. The Learning Pyramid may have been discredited, but I’m convinced that we learn more from doing than we do from reading about something.
So what makes reading so attractive that we’d rather read about things than actually do them? And I know for a fact that I’m not alone in having this problem. I think – and this might be entirely personal – it comes down to a couple of reasons.
One is that it’s much easier to read about something than to actually figure things out yourself. I want to experiment with sharding in NoSQL databases? I have to set up virtual machines, set up the software, write scripts to generate testing data, think about how to perform some experiments, and actually run them. Naturally I’d want to collect some data from those experiments; maybe reach a couple of conclusions even. That’s a lot of work. It’s much easier to just read about it. It’s infinitely easier to stumble upon and read an article on “How to Really Get Things Done Using GettingThingsDone2.0 and Reverse Todo Lists” than it is to actually get something done.
The second reason, at least for me, is that it gives me the feeling that I’m learning more about things. In the time it takes me to set up all the stuff above, I could have read who-knows-how-many articles. And it’s true in a sense. The information isn’t useless per se. I’m learning more shallow knowledge about a lot of different things, versus in-depth knowledge about a few things. It gives me all kinds of cool ideas, things to do, stuff to try out. But I never get around to those things, because I’m always busy reading about something else!
So I have taken drastic measures.
I have removed close to 95% of my feeds from Google Reader. I’ve blocked access to Reddit and HackerNews so I’m not tempted to read the comments there. I check hackurls.com (an aggregator for Hacker News, Reddit’s /r/programming and some other stuff) at most once a day. Anything interesting I see, I send to my tablet (at most two articles a day), which I only read on the train (where I don’t have anything better to do anyway). I avoid Wikipedia like the plague.
I distinctly remember being without an Internet connection for about a month almost four years ago. It was the most productive time of my life since the Internet came around. I want to return to the times when the Internet was a resource for solving problems and doing research, not an interactive TV shoveling useless information into my head.
Now if you’ll excuse me, I have an algorithm to write and a website to finish.
The text of all posts on this blog, unless specifically mentioned otherwise, is licensed under this license.