Wednesday, December 3, 2008

Embedding Qt Widgets into QtWebKit

Qt has its awesome built-in WebKit support, which makes it extremely easy to add full-featured browser/HTML-viewer capabilities to your application (including JavaScript!).


It is possible to embed any Qt widget into your QWebPage. The necessary steps are quite simple: derive from QWebPage and overload the createPlugin() function, make sure that PluginsEnabled is set in the QWebPage's settings, and assign that page to any QWebView.


It is now possible to embed widgets that are known to Qt's runtime meta-type system into a WebView. You can make a widget accessible using either the Q_DECLARE_METATYPE macro or the qRegisterMetaType function.
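For the function-based variant, the registration is a one-liner (a sketch, using the MyCalendarWidget class that Step 2 below will introduce):

qRegisterMetaType<MyCalendarWidget>("MyCalendarWidget");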


To show the widget, you have to add an HTML object tag to your page, like this:


<object type="application/x-qt-plugin" classid="YourClass" name="myObject" />
It's now visible and can even be manipulated through JavaScript: you can access its properties and its public slots.


I've created a small demo that shows you how to do it and what is possible. It consists of a QMake project (.pro-file), two pairs of header and implementation files for the MyWebKit/MyWebPage and MyWidget classes, and a demo HTML page. It should compile and run on any supported platform (Windows, Linux, Mac), but of course only if QtWebKit is enabled in the Qt installation.


Step 1


First, we should derive from the necessary QtWebKit-classes to create our own MyWebView class that always has Qt plug-ins enabled.


MyWebKit.h



#ifndef MY_WEBKIT_H
#define MY_WEBKIT_H

#include <QWebPage>
#include <QWebView>

// Derive from QWebPage, because a WebPage handles
// plugin creation
class MyWebPage: public QWebPage
{
    Q_OBJECT
protected:
    QObject *createPlugin(
        const QString &classid,
        const QUrl &url,
        const QStringList &paramNames,
        const QStringList &paramValues);
public:
    MyWebPage(QObject *parent = 0);
};

// Derive a new class from QWebView for convenience.
// Otherwise you'd always have to create a QWebView
// and a MyWebPage and assign the MyWebPage object
// to the QWebView. This class does that for you
// automatically.
class MyWebView: public QWebView
{
    Q_OBJECT
private:
    MyWebPage m_page;
public:
    MyWebView(QWidget *parent = 0);
};

#endif


MyWebKit.cpp



#include "MyWebKit.h"

#include

MyWebPage::MyWebPage(QObject *parent):
    QWebPage(parent)
{
    // Enable plugin support
    settings()->setAttribute(QWebSettings::PluginsEnabled, true);
}

QObject *MyWebPage::createPlugin(
    const QString &classid,
    const QUrl &url,
    const QStringList &paramNames,
    const QStringList &paramValues)
{
    // Create the widget using QUiLoader.
    // This means that the widgets don't need to be registered
    // with the meta-object system.
    // On the other hand, non-GUI objects can't be created this
    // way. When we'd like to create non-visual objects in
    // HTML to use them via JavaScript, we'd use a different
    // mechanism than this.
    QUiLoader loader;
    return loader.createWidget(classid, view());
}

MyWebView::MyWebView(QWidget *parent):
    QWebView(parent),
    m_page(this)
{
    // Set the page to an instance of our own MyWebPage class,
    // because only objects of that class will handle
    // object-tags correctly.
    setPage(&m_page);
}

It's now possible to embed Qt classes using the above-mentioned object tags.
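Just to illustrate, a minimal main() using these classes could look like this (a sketch, assuming the demo page Test.html from Step 3 sits in the working directory; the demo project's actual main may differ):

#include <QApplication>
#include <QDir>
#include <QUrl>
#include "MyWebKit.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    // MyWebView automatically uses MyWebPage, so plug-in creation
    // via object-tags works without further setup
    MyWebView view;
    view.load(QUrl::fromLocalFile(QDir::current().absoluteFilePath("Test.html")));
    view.show();
    return app.exec();
}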


Step 2


The second step is to create a class that's known to the Qt runtime meta-type system. We can't directly use Qt widgets this way, because runtime meta-types need copy-constructors. So we derive from a Qt widget and add a kinda dull copy-constructor to it.

MyWidget.h



#ifndef MY_WIDGET_H
#define MY_WIDGET_H

#include <QCalendarWidget>
#include <QMetaType>

class MyCalendarWidget: public QCalendarWidget
{
    Q_OBJECT
public:
    MyCalendarWidget(QWidget *parent = 0);
    // Q_DECLARE_METATYPE requires a copy-constructor
    MyCalendarWidget(const MyCalendarWidget &copy);
};
Q_DECLARE_METATYPE(MyCalendarWidget)


#endif

We use a calendar widget because it's something that doesn't already exist in HTML and could be quite useful. Additionally, it has a few properties that we'd want to access from JavaScript.
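The matching implementation file is almost trivial. Here's a sketch of what MyWidget.cpp boils down to (the demo's actual file may differ slightly):

#include "MyWidget.h"

MyCalendarWidget::MyCalendarWidget(QWidget *parent):
    QCalendarWidget(parent)
{
}

// The copy-constructor only exists to satisfy Q_DECLARE_METATYPE;
// it creates a fresh widget and ignores the source widget's state.
MyCalendarWidget::MyCalendarWidget(const MyCalendarWidget &copy):
    QCalendarWidget(copy.parentWidget())
{
}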


Step 3


The final step is to build the HTML page that embeds the widget and executes some JavaScript on it. Here's my example:

Test.html



<html>
<head>
<title>QtWebKit Plug-in Test</title>
</head>
<body>
<object type="application/x-qt-plugin" classid="MyCalendarWidget" name="calendar" height=300 width=500></object>
<script>
calendar.setGridVisible(true);
calendar.setCurrentPage(1985, 5);
</script>
</body>
</html>

The example sets the gridVisible property to true and shows the month I was born in. Of course, the possibilities seem endless! :-)


Conclusion


The only thing I am still missing is connecting Qt signals to JavaScript functions, similar to registering callbacks in AJAX. It's possible to export non-visual objects and use them from within JavaScript, too (think of a database connection, for example).
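For the non-visual case, QtWebKit provides QWebFrame::addToJavaScriptWindowObject(). A tiny sketch of how any QObject could be exposed to a page's JavaScript (the name "myObject" and the view variable are assumptions, following the main() sketch from Step 1):

// any QObject-derived instance works; its properties and public
// slots become accessible from JavaScript as window.myObject
QObject *obj = new QObject(&view);
view.page()->mainFrame()->addToJavaScriptWindowObject("myObject", obj);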


You can download the complete project, which should work out-of-the-box when you have Qt with WebKit support installed, here

Friday, September 12, 2008

Nekthuth - Making Vim Love Lisp

My friend DieMumiee pointed me to a project called Nekthuth that is supposed to be a mini-version of SLIME, just for my beloved Vim. The web-site looks promising; there are even screenshots and short but good documentation. With it, it's quite easy to get Nekthuth installed and get started.


However, my current SBCL is kinda broken. Dunno what I did to it, but after an apt-get update it segfaulted while compiling some scripts and just seems defective. I tried Nekthuth anyway, only to experience a broken pipe. Ouch. :-(


So you're invited to test it out and send reports to me and the original author. I'm quite interested in whether this project can become at least a partial SLIME-replacement, drive more Lispers to use Vim, and make Lisp available to Vimmers.

Monday, September 1, 2008

Google's own browser: Chrome

I usually don't like blogging about a blog-post, but I'll make an exception for this one:


Philipp Lenssen writes in his blog that he received a comic-book from Google that describes a new, upcoming browser called Chrome. You can read about everything announced in his blog-post. IMHO there's nothing special in the list, nothing that hasn't been done before. Probably the most "exciting" thing about Chrome is that the tabs sit above the address-bar, which basically makes no difference but looks different from all the other browsers.


My guess about the technical details of Chrome is that it is built using WebKit as its back-end; more specifically, using QtWebKit from the new Qt 4.4 release. As Google has been a customer of Trolltech for Google Earth and maybe other tools before, QtWebKit is simply the best tool to build a browser from scratch.


Because Google's tools are usually high-quality and sometimes even revolutionary, we can probably expect more from the browser than has been mentioned in the comic. Nobody gives away his secrets before launch, anyways.


Update: There is coverage of this on heise, too: Google Chrome: Google greift Microsoft mit eigenem Browser an ("Google attacks Microsoft with its own browser"), but it's a bit too focused on Internet Explorer, imho. Some more interesting things about Google's relationship to the Mozilla Foundation, how it evolved and how Google Chrome might change it, can be found here. In this article I've been proven right that they use WebKit as their rendering engine. No word on Qt, though. Probably the most interesting thing about Chrome is that it separates browser tabs into individual processes.

Thursday, August 28, 2008

BeagleBoard - mini-PC at mini-Price

The guys over at BeagleBoard have created a tiny board with integrated CPU and graphics processing for as little as $149. With this small thingie you can do gorgeous stuff while using very little room. It'd be perfect for a car-PC, for example. And for only $23 you even get a transparent housing for the little fella. There's also an effort to port and maintain ffmpeg for this platform, making it quite multimedia-enabled.


This is probably as cool as Bug Labs wants to be. At least hardware-wise... a raw board is not quite as cool as a BUG, is it?

Wednesday, July 23, 2008

New watercooling setup

I've expanded my watercooling loop by one graphics card, namely my new (and second) GeForce 8800 GTS 640MB. My modified Thermaltake Orchestra now cools my CPU, the RAM (OCZ RAM with factory watercooling support) and two GPUs. The only piece missing is the motherboard's chipset. I don't think I will add watercooling to my current motherboard; I'd rather wait until I buy a new one.
Here are a few pics that show how beautiful it looks now.









Thursday, June 19, 2008

Evangelion 1.01: You are (not) alone

Since anime and computer-enthusiasm are strangely connected, here's some anime-news.

You are (not) alone is a new movie that re-tells the first few episodes of the original Neon Genesis Evangelion anime (and manga) and is the first of a series of four movies. We all know that Neon Genesis Evangelion is one of the best animes ever created and trying to remake it is a very delicate task.


I think, however, that the studios have done a great job with this first movie! The first fifteen to twenty minutes are nearly identical to the original, with a few scenes left out and short CG scenes added (which I think look a bit misplaced and don't fit in that well). But the further the movie progresses, the more scenes are replaced, and as the story develops (it has changed a little bit, too), the movie features quite a lot of new, high-quality content. I don't know whether it was a sound problem on my system, but there's a high-pitched sound in the background every time someone talks. I imagine it could be caused by filtering out background sounds from the original material, but I hope this is not the case. I'll try the DVD on a normal DVD-player when I get the chance.


Overall, I am very satisfied with this movie. As I don't have subtitles for it yet, I didn't understand a few scenes and couldn't follow the story 100% (my Japanese is still bad). I'll watch it with subtitles as soon as I get my hands on some and find the time, and will post an update.

Monday, June 16, 2008

New introductory book on C++ by Bjarne himself

Bjarne Stroustrup, the "inventor" and first implementer of C++, is about to publish a new book. It's titled Programming: Principles and Practice Using C++ and is going to be published in August. It's already listed on amazon.com. In contrast to The C++ Programming Language, this book is an introduction to programming in general, not only C++. It's something the C++ community has been lacking for years, with Accelerated C++ as the only alternative.

Wednesday, June 4, 2008

xstartonce: Boost your productivity

I've grown accustomed to a way of working on my desktop which greatly enhances my speed, and therefore productivity, by accelerating switching to certain applications. There are relatively few applications that I use on a daily basis. To mention just a few examples: I use Firefox (well, now it's Arora) for browsing, Vim to edit files, and a shell for doing various stuff. I have to switch between those frequently. Often when coding in Vim, I have to look up information on the net, so I switch to Firefox. When I'm done looking things up, I'll try them out on the console and then switch back to Vim to integrate them into the program. Switching back and forth between applications can be quite a hassle because of the "most recently visited" paradigm that usual implementations of Alt+Tab (or similar) use: the most recently visited window is the one that appears first when switching windows. This works great if you have only two windows. It gets irritating and slow when you have more than two windows and switch back and forth in an unstructured order.


So I've invented this methodology of work: every regularly used application is assigned a shortcut. It's Ctrl+Alt+F for Firefox, Ctrl+Alt+V for Vim and Ctrl+Alt+C for the console. The program I use for the shortcuts is self-made, because it needs one special feature: when a shortcut is activated, the program checks whether there's already a running instance of the application I want to switch to. If there is none, it is started (and automatically gets the focus). If it is already running, the existing instance is focused. This works great. When I need my browser, I don't have to think about whether I've already started it, or where I started it, or anything. I just press Ctrl+Alt+F and have a Firefox or Arora window activated.
I have shortcuts for about 8 applications at work, including one or multiple local and several remote consoles, the aforementioned Arora and Vim, Delphi, Microsoft Visual Studio, InstallShield, the production-copy of the application I'm writing and a few more.
Without my shortcut-application I'm really at a loss and struggling with cycling through windows with Alt+Tab.


Until now, there was only one unreleased, Windows-based program which would enable this work methodology: it's called KeyboardGuy, and it's written by me and used exclusively by me.


When I switched to Linux recently, I was really missing that comfort. So I looked for a way to accomplish this with as little work as possible.


The very first try was a shell-script that uses wmctrl to find out the process- and window-IDs that I needed. It looked up the process-name via ps and would activate the window if the basename of the specified application matched the process-name in ps. With my relatively limited knowledge of shell-scripting, I came up with a script like this:

#!/bin/sh

PROCESSES=`wmctrl -l -p | awk '{print $3}'`
DOEXIT=
# Note: the pipe makes the while-loop run in a subshell, so the
# 'exit' below only leaves that subshell, not the whole script.
ps h -p $PROCESSES | while read i;
do
    ID=`echo $i | awk '{print $1}'`;
    NAME=`echo $i | awk '{print $5}'`;
    NAME="`basename "$NAME"`";
    if [ "$NAME" = "$1" ]; then
        wmctrl -l -p | while read WID;
        do
            WPID=`echo "$WID"|awk '{print $3}'`;
            if [ "$WPID" = "$ID" ]; then
                wmctrl -i -a `echo $WID|awk '{print $1}'`
                exit
            fi
        done
    fi;
done

echo $DOEXIT
"$1"


There is one problem, however: the script needs to terminate completely after wmctrl -a is executed. The call to exit that should accomplish this did not work as expected, because the pipe into the while-loop makes the loop run in a subshell, so exit only terminates that subshell. The line "$1" (which starts the application given as the first parameter) is therefore always executed, even when a window was activated first. Because I was rather unsatisfied with the whole thing and couldn't get it to work, I decided to rewrite it in C++, using Qt (because Qt makes many things much easier for me here, and I wanted to get this done quickly).
The final result is the application which I call xstartonce.

xstartonce


The current version is the second iteration of the C++/Qt version. The first version would match the basename of the given application against the path found in ps's output. This leaves one big limitation, though: you can't assign multiple shortcuts to one and the same program. I need this, for example, because I want shortcuts for a local and several remote consoles. Those would look like "urxvt" for the local console and "urxvt -e ssh melchior" for the remote console. Because the basenames match (both are urxvt), the shortcut-application would not be able to switch between those two distinct consoles.

Configuration file


That's why I invented "named shortcuts". They need a configuration file (~/.xstartonce) with a very simple structure: <shortcut-name> = <command --param>. My ~/.xstartonce looks like this:

urxvt-local    = urxvt
urxvt-melchior = urxvt -e ssh melchior
firefox = firefox
assistant = assistant
gvim = gvim
vim = urxvt -e vim


Currently, only named shortcuts are possible. This means that for each application you want to launch, you need a configuration-file entry. I've totally abandoned the formerly discussed algorithm that matches the basenames of the processes in favor of this approach. I will, however, make both possible in the future. Basename-matching addresses something that was already bugging me with the Windows version: when Firefox is opened from anywhere other than my shortcut-app, it won't get "recognized" by it, and a second instance will be opened the next time the shortcut is used. With the basename-matching algorithm, apps that I never want to open twice are handled better.

Temporary Process-ID list


xstartonce creates an entry in /tmp/xstartonce-db.<user-name> for each started named shortcut. The structure is like the configuration file's: <name>=<process-id>. When executing a shortcut, the name is first searched in the configuration. If it is found, the xstartonce-db is searched for the name; if it is found there, the window-ID matching the process is looked up via wmctrl -l -p. If a window is found, it is activated. If either no window or no process is found, the command is executed.
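Purely for illustration, the lookup described above compresses into a Qt-sketch like the following (hypothetical helper names and simplified error handling; the real implementation lives in the repository mentioned below):

#include <QtCore>

// find the window-ID belonging to a process-ID via wmctrl -l -p
QString findWindowId(const QString &pid)
{
    QProcess wmctrl;
    wmctrl.start("wmctrl", QStringList() << "-l" << "-p");
    wmctrl.waitForFinished();
    foreach(const QString &line, QString(wmctrl.readAllStandardOutput()).split('\n')) {
        QStringList cols = line.simplified().split(' ');
        if(cols.size() >= 3 && cols.at(2) == pid)
            return cols.at(0);
    }
    return QString();
}

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);
    QString name = app.arguments().value(1);
    // 1. look up the command for the named shortcut in ~/.xstartonce
    QSettings config(QDir::homePath() + "/.xstartonce", QSettings::IniFormat);
    QString command = config.value(name).toString();
    if(command.isEmpty())
        return 1;
    // 2. look up a previously started process-ID in the tmp db
    QSettings db("/tmp/xstartonce-db." + QString(qgetenv("USER")), QSettings::IniFormat);
    QString windowId = findWindowId(db.value(name).toString());
    if(!windowId.isEmpty()) {
        // 3. a window exists: activate it instead of starting a new instance
        QProcess::execute("wmctrl", QStringList() << "-i" << "-a" << windowId);
    } else {
        // 4. no process or no window found: start the command, remember its PID
        QStringList args = command.split(' ', QString::SkipEmptyParts);
        qint64 pid = 0;
        QProcess::startDetached(args.takeFirst(), args, QString(), &pid);
        db.setValue(name, QString::number(pid));
    }
    return 0;
}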

Getting xstartonce


xstartonce has three dependencies: a C++ compiler, Qt, and a UNIX-like operating system with the X window system running. If you meet those criteria, you can download the source-code from my github-repository.

Notice: I'll clean up this blog-post and correct spelling and grammar errors tomorrow. It's already past bedtime, so I'm off to get some sleep :-)


Using keyboard-shortcuts


What I haven't mentioned yet is how this all works together with keyboard-shortcuts. Well, the answer is easy: use whatever tool you know for executing keyboard-shortcuts and run xstartonce instead of the application you want to launch. I'm currently using KDE's shortcut-enabled menus together with xstartonce. You can edit KDE's menu by right-clicking it and choosing "Edit menu". Then you can add new menu-items and assign a shortcut to them.
Here's a screenshot how I do this:
Assigning shortcuts in KDE's menu

Wednesday, May 28, 2008

Arora Configuration System Proposal

Introduction


First, I'd like to show you two mock-ups that I've created so you get
a feel for where I am going.



Here's the first screenshot where I used colors for each individual
section:



And here's another with the same colour for each section:




Every webbrowser has a multitude of configuration settings. Some of
them should be easily accessible to the user, and some of them are for
fine-tuning, where usually only the defaults should be used. There are
settings of different complexity that require different degrees of
prior (technical) knowledge. For example, an easy setting that every
user should understand is "Restore window position and size when
starting". This is just a simple checkbox in the settings dialog that
a user can either check or not. A far more complex setting would be a
default stylesheet that should be applied to all web-pages. First,
creating such a stylesheet requires technical knowledge, and second,
the implications of providing such a stylesheet are not very easy to
understand.



Suggestions


My opinion is that there should be a settings-dialog that is
convenient and includes just the settings that Arora chooses to make
accessible to the normal user. For advanced users, there is an
about:config page which includes all settings. But that about:config
should be nothing like Firefox's about:config. Firefox's config-page
is only for hackers and lacks usability. Here's how I'd imagine
Arora's about:config:



First of all, Opera has a nice config-page which we can base ours on.
You can see a screenshot here:






Navigate to opera:config and you can see it. It consists of many
sections which can be opened and closed. Every section includes
multiple settings, which sometimes have custom editors. The editors,
however, are not as convenient as they could be. For example, to enter
a color, you have to type in the hexadecimal color-value instead of
using a color-picker dialog. Every section includes a "Save" and an
"Abort" button, which is very reasonable. If there were no explicit
way to save, you'd have to save after every change of a setting. And
saving often means rebuilding or redrawing something, which can be
quite annoying when just changing one simple value.



Opera has meaningful captions for each setting, too. Firefox, on the
other hand, only displays the internal setting-name (which often is
quite verbose, but not as good as Opera's captions). What even Opera
is missing, though, is a description of each setting. I find that very
important, although it is, of course, a lot of work.



A search-functionality is also present, and it behaves awesomely: if
an item in a section matches, the section is displayed, and inside the
section only matching items are displayed. This way you get to the
settings you're looking for quite quickly. I think the config-page
could additionally include some more filters, something like filters
to show only user or expert settings. With these filters, the
config-page could even fully replace the settings-dialog. IMHO, this
would be really cool. But again, I think the settings-dialog can
provide a much more optimized display and user interface than an
automatically created config-page ever could, so we should keep a
separate dialog that's hand-coded.



There are also settings that are added to the system on-the-fly. For
example, extensions should be able to add settings and have them
displayed in the config-page like any other setting.



Implementation


To sum up the information that is needed for each available
configuration option, it comes down to this:




  • Category

  • Internal name

  • Display name

  • Type or Class

  • Editor (optional)

  • Value-constraints (optional)

  • Short, meaningful description

  • Default value

  • Flags


The Flags can be used to mark a setting as "For Experts", for example.
In the next chapter I'll discuss another usage of a flag that a
setting could have.

A Class is a combination of Type, Editor and Constraints. The
Editor and Constraints can still be overridden when using a Class,
though.



A category, in turn, needs some more information than its bare name,
too.




  • Internal name

  • Display name

  • Icon (optional)

  • Color

  • Meaningful description, may be a bit longer



To make the config-page visually appealing and to make regularly
visited options easy to find, a category should have a special color.
You can see that this looks good in the mockup I provided.



With this information, it should be relatively straightforward
to implement some settings-classes to read and write settings and to
create the config-page.



Reading settings should be as easy as

Settings::Download::Style style = readSetting<Settings::Download::Style>("Download.DownloadStyle");

Writing, however, should always occur in a group. Hence there should
be a class SettingsWriter that can be used to write settings. When the
object goes out of scope, it flushes all the changes that have been
made.
SettingsWriter writer;
writer.set("Download.DefaultDirectory", "/home/user/downloads");
writer.set("Download.DownloadStyle", Settings::Download::AlwaysUseDefaultDir);



If this proposal is accepted, I will work out some class-prototypes
for further discussion.



Dreams come true


To have an absolutely awesome and really magnificent
settings-system, you could think about automatically storing chosen
settings online. For example, the tab-behaviour usually should not
change no matter from where you are browsing. Now imagine that
certain settings that are marked as shared are stored in your
online-profile and automatically loaded when using Arora from a
different box with the same profile. This online-profile should, of
course, include bookmarks and history, too. Because those are more
sensitive than settings, they are of course optional.

Friday, May 16, 2008

Arora: The first usable QtWebKit-based Browser

Arora is the name of the first QtWebKit-based browser-project that aims to become usable as a day-to-day browser for the masses. It's a spin-off of the QtWebKit demo-browser which is included in Qt, and it currently doesn't feature much more than the demo does. Its code is managed in a git-repository at github.com, which makes it very easy to hack on. I have already created a git-branch and committed a small bug-fix and other changes myself.


This project is just a few days old and can already be used without bigger flaws. It will definitely catch up to Firefox (feature-wise, for the basics) soon. This is very exciting, imho, and I can't wait to see what cool things will happen to Arora in the future.


Here's a screenshot of the version that I compiled myself using Microsoft Visual Studio 2005 Express Edition:


Arora Browser Screenshot

New ELF Linker for GNU Binutils: gold

This totally slipped my radar until now. Ian Lance Taylor, an employee at Google, wrote from scratch a new ELF linker that is meant to replace ld as Linux's default linker. The new linker is called "gold" and is approximately "five times faster linking large C++ applications", according to Ian's blog-post.
It was released to the open-source crowd on March 21st with this announcement.
gold is written in C++ and consists of as little as 50,000 lines of code.
Nice!

Wednesday, April 16, 2008

Passwords in Firebird

Firebird has some funny ways of handling passwords. Only the first 8 characters of a password are evaluated; every character after the 8th is silently ignored. That's especially funny because the default password for a Firebird installation is 'masterkey', which has 9 characters. You can, however, successfully log in to a freshly installed Firebird server by providing the password 'masterke'.
I've been working with Interbase and Firebird for more than four years and only now realized this, when a co-worker at our company found it out while learning SQL.
The only program I know of that makes note of this is gsec, which prints a warning when setting a password longer than 8 characters.

Friday, March 28, 2008

Why Crisis Core - Final Fantasy VII disappoints me

Man, was I glad when my Crisis Core package arrived. I've been playing this game since Wednesday and have clocked 7:30 hours so far.


Crisis Core, in contrast to Final Fantasy VII, does not have tactical battles. In fact, they're the direct opposite: they're random. Really random. Well, of course entering a battle was always random in Final Fantasy VII, but in Crisis Core even your special attacks, materia upgrades and level-ups are random! Who on earth had that stupid idea? There's a slot-machine-like thing called DMW (Digital Mind Wave - wtf?) in the upper left corner which spins while you beat your enemies. It spins pictures of characters and numbers. When the pictures on the first and last slot match, the "Limit Verge" appears, and depending on the outcome of the middle slot you'll execute a Limit Attack.


None of this is influenced by you in any way. You can't tell the slots when to stop; you can't do anything. Depending on the outcome of the numbers in the Limit Verge, you or your materia might level up if the numbers come out right. Theoretically this means that you can gain two levels in twenty seconds, or gain no level at all in 5 hours. More fighting doesn't necessarily increase your level either. Well, obviously Squeenix was smart enough not to make it as random as a true random number generator, but it's still unpredictable.


The Limit Attacks are somewhat strange, too. Sometimes, before the attack begins, there are FMV sequences that tell a part of the story -- right while you're battling totally unrelated enemies or are on a mission or something. I don't understand this at all; it makes no sense to me.


You don't have weapons with materia slots, either. In fact, you don't even have different weapons! Instead you have a fixed number of materia slots where you can put your materia, which sometimes increases when you find special items. At least you have two accessory slots in the beginning... but again, no weapons. This actually pisses me off the most. There are no connections between materia slots, no slots with doubled materia-exp growth, no new weapons that do more damage, nothing.


And then there's Materia Fusion. You can fuse any two materia together to gain a new one. The outcome, however, is always only sooo teeny-weeny better than the originals that it almost never brings you a significant advantage. Plus, you can't predict the outcome at all, and sometimes you basically just lose one of the materia while the other stays the same. Absolutely useless.


Then there's the difficulty of the game. Being a hardcore gamer, I picked Hard Mode over Normal Mode. That lasted 30 minutes, and I nearly threw my PSP across the room out of desperation. I died multiple times on the first hard enemy, and the bad thing about that was that there's a 5-minute sequence that YOU CAN'T SKIP before that battle. Oh jesus, how I hate this! That was the reason why I didn't finish God of War, too, by the way. Un-skippable sequences are the most sinful thing a game-developer can ever do. (By the way: in Crisis Core you can ALWAYS interrupt the game by pressing Start. This somewhat makes up for not being able to skip sequences... but only somewhat.)


So I decided to start from the beginning in Normal Mode. This was a good decision, because the game basically becomes child's play that way. It's actually so easy that it gets boring at times. Most of the time, when you hit an enemy, it's pushed back and gives you time for the next slash. This makes battles where you don't take a single scratch more than common.


The mission system is used for leveling and resembles Monster Hunter in many ways. Because you're more often in areas without enemies than you were in Final Fantasy VII, it's harder to run around and level up as I often did in the original. That's where the missions kick in; you can enter them at every savepoint. Pick a mission by category and difficulty, enter it, beat it, get a reward, go on to the next mission. Those are always in the same environments, like the various levels in Monster Hunter. Missions are actually a bit less boring than running around on the field fighting the same monsters over and over again. That's what I did in Final Fantasy VII for hours... or well, actually even for days and weeks. :-)


Oh, and here's the moooooost disappointing thing ever: you don't have a party! You're always running around alone as Zack, and you can't switch party members! That's so unbelievably boring!


There are good things about the game anyway, and it's quite fun to play, but I must say that I would have more fun if I put FFVII on my PSP again and played that for the nth time. It's just so much more tactical and muuuuch deeper than Crisis Core. Crisis Core seems to be meant for the occasional gamer and not the typical Final Fantasy RPG fan. It's more an action game with a good story than an RPG, if you ask me. I hope there are at least 60 hours of play-time in the game, but I'm not too sure about that yet... I'm looking forward to finding out more as the story continues; maybe I'll write about some positive aspects of the game after I've finished it.


That being said, the game is still much fun. But again, it's not as much fun as Final Fantasy VII was.

Sunday, February 17, 2008

New hardware for my lil server

I recently ordered new hardware for my little file-serving, downloading and IRCing box. As the nature of this 'server' is to be always on without demanding powerful hardware, I went for the following components:

  • Intel Celeron 420 (1.6GHz, Single-Core, 35W power-consumption)
  • ASUS P5B-VM Motherboard (5 SATA-ports, GBit ethernet, on-board VGA)
  • Crucial 1GB DDR2-667
The hardware that was being replaced was:
  • Tyan Tiger motherboard
  • 2x Pentium III 1GHz
  • 2x 512MB SD-RAM
  • 2x Adaptec 1210SA
  • 1x Intel 1GBit Ethernet (eepro1000)
  • ATI Radeon 8500
(Note that I currently still have the Intel Ethernet adapter plugged in, since the on-board network didn't work out of the box and I haven't found the time to fiddle with it yet.)

As you can see, this box is optimized for low cost, low power consumption and replacing as many of the old parts as possible. At a total of 136 EUR (the CPU was only 35 EUR!) including shipping, I'm really satisfied. One additional Adaptec controller would've cost 50 EUR and would've added only 2 SATA ports, so this was probably the best solution. I still have 2 SATA ports unused on the mainboard and have 2 SATA controllers lying around, ready to be plugged in if even those get used up. Finally I have the potential to grow my RAID even bigger. :-)

Fixing raid-5 failures, the adventurous approach

You might remember the trouble I had with my raid5 before. Well, it's still not 100% sorted out, but I know the cause now. It really was a faulty drive! I came to notice that after I replaced the motherboard, CPU and RAM with new components. After I had added them and booted into the system (which worked flawlessly on the first try, by the way, although the hardware is completely unrelated to the previous one), I noticed a click-sound from one of the harddisks. I immediately realized that I had bought new hardware for nothing. But at least I was sure which component was causing the failure now, plus I got 5 free SATA ports to upgrade the RAID. Previously I had no unused ports, leaving no potential for a possible upgrade.

But somehow the raid got messed up in the process. I wasn't able to assemble it with the remaining 3 discs, because one disc was always added as a spare. So I had 2 functional devices and one spare, which is obviously not enough to run the raid. This is due to some corrupted superblock, but luckily the superblock is just metadata, which can be recreated. If I had known the correct devices and the slots they corresponded to before all this happened, I could've recreated the array with mdadm --create and the correct params. Unfortunately, I did not know the exact params, so I had to take a more... adventurous approach.

There's a perl-script on the linux-raid wiki which permutes over each possible combination of devices (including one missing device) and tries to mount the created array. It does everything in read-only mode, so no actual data is being touched, only metadata. If it can mount the raid, it prints the mdadm --create command used to build it, stops the array and goes on. You can then execute the creation-commands yourself and see if everything's right. In my case, luckily, it was, and I got all my data back. Note that I had to connect the failed drive for this to work, because the script always replaces one given device with 'missing' ('missing' tells mdadm that this device is, well, missing) instead of adding 'missing' to the devices-list. This is because it's not supposed to recreate a partial, but only a complete array. So you need to provide ALL raid-members on the command-line, otherwise it won't work. It should be fairly easy to hack the script to work for partial arrays, too, but it was easier for me to add the drive again than to hack perl-code.


After this the raid was up, and I needed to mark the drive as faulty and remove it so it couldn't cause problems anymore. It's always a bit problematic to map the device-names (/dev/sdx) to the real harddrives, and you might pull out the wrong one, possibly leading to more problems. I found a reliable way to identify the drives:

hdparm -I /dev/sdx | grep 'Serial Number'
This will print the serial number, which usually is visible on the actual disks, too. Somehow the -I option to hdparm never occurred to me before. The serial number matched one of my disks, and so I was able to locate and remove the faulty drive.

Yay!

Next step is to contact the reseller for a replacement. I hope the next bad drive will be less problematic.

Thursday, February 14, 2008

Qt debugging with Visual Studio 2005

zbenjamin, one of the fine folks from the Qxt-project, just gave me his additions to the AutoExp.dat file that let you debug native Qt-types (e.g. QString) far more easily. Here's the before/after comparison:

Before



After



How it's done


And here's what you have to do to use it yourself:
First, open up the file

C:\Program Files\Microsoft Visual Studio 8\Common7\Packages\Debugger\autoexp.dat

Important: Under Windows Vista, you need to open the file as Administrator, because it is not writeable by the user and the program-files-virtualisation will get in your way.

Then, add the following lines under the [AutoExpand]-mark:
QObject =classname=<staticMetaObject.d.stringdata,s> superclassname=<staticMetaObject.d.superdata->d.stringdata,s>
QList<*>=size=<d->end,i>
QLinkedList<*>=size=<d->end,i>
QString=<d->data,su> size=<d->size,u>
QByteArray=<d->data,s> size=<d->size,u>
QUrl =<d->encodedOriginal.d->data,s>
QUrlInfo =<d->name.d->data,su>
QPoint =x=<xp> y=<yp>
QPointF =x=<xp> y=<yp>
QRect =x1=<x1> y1=<y1> x2=<x2> y2=<y2>
QRectF =x=<xp> y=<yp> w=<w> h=<h>
QSize =width=<wd> height=<ht>
QSizeF =width=<wd> height=<ht>
QMap<*> =size=<d->size>
QVector<*> =size=<d->size>
QHash<*> =size=<d->size>
QVarLengthArray<*> =size=<s> data=<ptr>
QFont =family=<d->request.family.d->data,su> size=<d->request.pointSize, f>
QDomNode =name=<impl->name.d->data,su> value=<impl->value.d->data,su>

Now restart Visual Studio and you should be good to go.

Monday, February 11, 2008

More slime goodness

Remember the Slimy Lisp video I posted earlier this year? Well, another blogger, Peter Christensen, has put more effort into it and has written a reference/annotation for the video, including a timeline, a transcript of important parts, and introductory explanations on how to set up SLIME to use with that video. This will give SLIME-beginners an even better kickstart.
Thanks very much for your effort, Peter!

Thursday, February 7, 2008

Fixing my software raid-5 with mdadm

On my server-box I have a software raid-5, /dev/md0, consisting of 4 500GB SATA-harddisks, namely sda1, sdb1, sdc1, sdd1. While I was working on my other box, pulling off some dd-stunts on a lvm-volume, my raid on the server suddenly died. There was some output in dmesg saying that both sda and sdb are somewhat corrupt and that they've been removed from the raid, leaving it unfunctional (you need at least n-1 disks in a raid-5 to keep it operational). I was very shocked by this. I restarted the PC and tried to re-assemble the raid with

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
to no avail. mdadm said 'bad superblock on device /dev/sda1' (or similar); leaving out sda1 worked, and I had the raid assembled with 3 out of 4 disks. That's of course not satisfactory. I stopped the raid and ran S.M.A.R.T. checks on each of the 4 disks with
smartctl -t long /dev/sdx1
This took over an hour, so I went to sleep and checked the results the next day -- 100% error-free, according to smart! That's really strange. Assembling the array still did not work because of sda1. I opened up sda in cfdisk and saw the exact same partition-size as on sdb and the others, but I knew that something was corrupt. So I wrote the partition-table to a file to back it up, removed the partition and re-added it. Then I used
mdadm --add /dev/md0 /dev/sda1
to re-add the partition that was formerly part of the array, anyway... mdadm did its job and recovered the raid. You can watch the progress by doing
cat /proc/mdstat
It took around 7 hours or so to complete, and now the raid5 is fully functional again.
What a horror-trip! I'm still wondering what was going on and why sda1 was kicked out of the array.
A small addition: after I had fixed the raid, it was OK for a day or two, but then one day when I came home, I noticed it had broken again. I remembered that I had stepped onto the USB-keyboard attached to the server right after I came home, and I found an unhandled-IRQ oops in the kernel-log from exactly that timespan. So my guess is that the USB-handler somehow messed something up, which in turn killed the RAID again. But I'm still investigating the issue; for now, rebooting and forcing the assembly worked fine. I hope I won't have any more problems with it...

Converting lvm to a normal partition

I've recently set up a new gentoo-box and initially decided to use lvm2 for my root filesystem. Well, I ran into some issues with the kernel and initrd, which I could figure out and fix. But then I noticed that, because of the lvm, I won't be able to access the disk from Windows with the free ext3-drivers that are available. Linux will even boot faster, because I'll have no need for the initrd anymore. That's when I decided to get rid of the lvm. And that's actually easier than you'd think: if you have a spare partition or harddisk around that is at least the size of the logical volume you'd like to convert to a partition, you can easily do this with dd. Imagine that /dev/vg/volume is a logical volume that consists of only one partition, /dev/sday:

sh# dd if=/dev/vg/volume of=/dev/sdbx bs=8M
sh# dd if=/dev/sdbx of=/dev/sday bs=8M

That's it. The first dd backs up the logical, continuous data that's hosted on the lvm to a partition. After it has run, you'll be able to mount /dev/sdbx and see that the content of /dev/vg/volume has been copied. The mounted partition's usable size will be exactly the same as the volume's size, even if the partition itself is much bigger. That's because the filesystem on it is still the same size it was before. You could fix this with resize2fs (if you use ext2 or ext3, that is), but it wouldn't make much sense, because we want to move the data to the other partition anyway.
The second dd copies the data back to the partition it was formerly stored on, but without the additional lvm-abstraction. The lvm will be overwritten by the 'flat' filesystem-data. If sdbx happens to be bigger than sday, an error will be printed when dd reaches the end of the target partition. This is nothing to worry about, since the data left on sdbx is not interesting to us anyway.
Finally, you can grow the filesystem to the actual partition size with resize2fs. Since the lvm itself needed some space, too, the partition is slightly (a few bytes) larger than the filesystem now.

Friday, February 1, 2008

Accessing MS SQL UID-fields with Qt

When working with a database that relies heavily on uniqueidentifiers, I experienced problems handling those fields with Qt's built-in SQL-classes.
First, I connect to the database via the QODBC-driver. Then I fetch the results of table 'a' and try to fetch the corresponding results in table 'b', which are referenced by foreign keys. Here's a code-snippet:

QSqlQuery a(db);
a.exec("select id from a");
a.next();
QSqlQuery b(db);
b.prepare("select id from b where b.id_a=:id");
while(a.isValid())
{
    b.bindValue(":id", a.value(0));
    b.exec();
    // ERROR: Operand type clash: image is incompatible with uniqueidentifier
    a.next();
}

So it seems Qt converts the binary data it received for the uniqueidentifier into a binary blob of type image.
There's a simple way to convert the GUID that is stored in a.value(0) to a formatted UID-string, which in turn can be used to bind the value in the second query:
#include <cassert>
#include <string>

#include <QByteArray>
#include <QString>
#include <QUuid>
#include <QVariant>

QString uuidToString(const QVariant &v)
{
    // get pointer to raw data
    QByteArray arr(v.toByteArray());
    std::string result(arr.constData(), arr.size());
    assert(result.size() == 16);
    const char *ptr = result.data();
    // extract the GUID-parts from the data
    uint data1 = *reinterpret_cast<const uint*>(ptr);
    ushort data2 = *reinterpret_cast<const ushort*>(ptr += sizeof(uint));
    ushort data3 = *reinterpret_cast<const ushort*>(ptr += sizeof(ushort));
    uchar data4[8] =
    {
        *reinterpret_cast<const uchar*>(ptr += sizeof(ushort)),
        *reinterpret_cast<const uchar*>(++ptr),
        *reinterpret_cast<const uchar*>(++ptr),
        *reinterpret_cast<const uchar*>(++ptr),
        *reinterpret_cast<const uchar*>(++ptr),
        *reinterpret_cast<const uchar*>(++ptr),
        *reinterpret_cast<const uchar*>(++ptr),
        *reinterpret_cast<const uchar*>(++ptr)
    };
    // create a uuid from the extracted parts
    QUuid uuid(
        data1,
        data2,
        data3,
        data4[0],
        data4[1],
        data4[2],
        data4[3],
        data4[4],
        data4[5],
        data4[6],
        data4[7]);
    // finally return the uuid as a QString
    return uuid.toString();
}

Using this function, you can easily bind the value in the second query:
b.bindValue(":id", uuidToString(a.value(0)));

Edit: Starting from Qt 4.4.0 (I used the latest snapshot), QVariant supports GUIDs, and hence this function fails AND is unnecessary.

Thursday, January 31, 2008

SQL proposal: Select deleted and updated records

Here's another proposal for an addition/extension to SQL. In the earlier post I said that it should be possible to get feedback from delete- and update-statements. Here's an even better proposal that adds some syntax to SQL and allows you to fetch the records that have been updated or deleted.
Here's how it should look like:

delete from customers where marked_as_deleted="true" select id, name

Executing this will behave like a normal select-statement, but the results will be the id and the name of the customers that you have just deleted. With this it is possible to create a message like
The customers "Tom", "Bill" and "Lara" have been successfully deleted.

This is a good idea because it gives the user more feedback on the operation that has just been completed. With update-statements it's very similar:
update customers set marked_as_deleted="true" where last_action > "01.01.1998" select id, name

A possible user-feedback would be
The customers "Tom", "Bill" and "Lara" have just been marked for deletion, because they are too old.

You can simulate this, of course, by first selecting the customers and then running an update/delete with a) the same where-clause (which is unsafe, because the data might've been changed in the meantime) or b) one statement per fetched id, which is slower because it doesn't let the server handle the whole operation on its own.

Clean SQL-Server Client-API

Introduction


I often stumble upon limitations or awkwardness when dealing with the Firebird SQL-server. These are the big points which you can't fix in your client-application, because the API and some design choices do not allow it:

  • There's no progress feedback except when fetching datasets. Updating and deleting, however, is just one (potentially huge) blocking call.

  • There's no clean way to keep my datasets current. When they've been changed from a different (committed) transaction, I don't get any kind of notification.

  • Buffering datasets is a tremendously hard task to program, if you want to do it efficiently (both speed- and memory-wise).

  • Multithreading is a no-go optimization, because you need to establish a new connection for each thread. If you want to do intelligent multithreading with Firebird, the task becomes exponentially more difficult and error-prone.



Requirements


So here are the requirements I would expect from a well-designed client-API for Firebird, which would allow me to solve the aforementioned problems.

(Cumulated) progress feedback


This is applicable to update and delete (and maybe other) statements only; for select-statements, it's not needed. The client-app should be notified about the progress of the operation it just executed. Otherwise, (unexpectedly) long executions may lead to the wrong conclusion that the server 'hangs' or that something else is going wrong. You should be able to create visual feedback on the client-side for a better end-user experience. I often see myself writing client-side loops instead of simple updates with where-clauses for the sole reason of providing a progress-bar to the end user, since that is a strong usability-requirement of good UI design. Of course you can't know how many datasets are still to come, but you should at least know how many datasets have been processed.
There are two ways in which I can imagine the Firebird-API to support this:
a) provide a call-back function to the execute-function:
int execute_progress_info(int datasets, void *data, some_other_args) {
    if( time_elapsed > 10000 )
        return 0;
    else
        return 1;
}

If the call-back function returns 0, execute aborts its operation (if possible).
b) make the client call the execute-function repeatedly and have it return the number of processed datasets. If the execute-function returns 0, the client stops calling it:

int datasets = 0, temp;
while(temp = sql_server_execute(statement)) {
    datasets += temp;
    printf("%d datasets processed.", datasets);
    /* possibly process some window- or system-messages here */
}


Integrated SQL-parser and editor


I often see the requirement for the client-application (especially for client-library wrappers) to parse and understand the SQL-query. For example, the IBObjects Delphi library does this to modify the SQL-string. Currently, much of the SQL-parsing logic is re-implemented by IBObjects, while it should be accessible through the SQL-server's API to remove code-duplication and incompatibilities.
It should be possible to create a program-readable SQL-'tree' from SQL-code and vice versa. By SQL-'tree' I mean some kind of data-structure that represents the SQL-code in-memory, much like it looks when it is compiled by the server. This in-memory structure can then be modified (e.g. add or remove parts of the selected fields, or add an expression to the 'order by'-clause) by the program in an error-proof and logical fashion, and converted back to SQL-code or directly executed at API-level. This could be used for SQL-editors that have rich completion-features like today's IDEs (Visual Studio, NetBeans, etc.) have. There are SQL-editors that provide those, but they duplicate the SQL-parsing code, too, and are likely to break with unknown or new syntax. I will provide another good example of this later.

Partial exposure of SQL-subsystem functionality


What I mean by that is that I want to be able to check whether an existing in-memory dataset (which may or may not be in the database) matches a specific where-clause, or to order datasets that exist in the client's memory only by a given 'order by'-clause.

Event-notification systems with additional information


The event-mechanism as it is right now is kinda useless for many purposes I'd like to use it for. Its design is very simple, but this limits it to a very narrow range of applicable use-cases. For example, when monitoring a dataset for server-side changes, I can currently do the following: I have a base-name for the event, which ideally corresponds to the table's name. Let's say I want to monitor the datasets in the table 'customers' that I've partially fetched into a list. To monitor changes to each existing dataset, I need two different events: customers_del_$ID and customers_upd_$ID, where $ID is replaced with the dataset's ID. I have two triggers for the table which fire off the corresponding events like this:
create trigger customers_del_event for customers active after delete as
begin
    post_event 'customers_del_'||old.id;
end^


and

create trigger customers_upd_event for customers active after update as
begin
    post_event 'customers_upd_'||old.id;
end^


I then register two events for each dataset to monitor changes and deletions. When the upd-event fires, I can re-fetch the dataset by ID, and if the del-event fires, I can simply remove it from the list. But there's one caveat: when the ID of the dataset changes, I have no chance at all to notice that. There's no way the client can be notified about the ID-change by the server. Of course there are workarounds for this limitation, but none of the ones I can think of come with zero overhead.

Now what I would like to see is the ability to pass (arbitrary) parameters to the events. If it were possible to pass a 'new_id' parameter to the upd-event, all would be fine. I could simply re-fetch the dataset by its new ID. Or even better, pass all the required fields to the event and eliminate the need to re-fetch the dataset altogether, because the client can update its data from the received event's parameters. Here's how I would imagine the syntax:

create event customer_upd(
    id integer,
    name varchar(100),
    street varchar(100),
    city varchar(100))^
/* When parameters for events are possible, the event needs to be declared first */

create trigger customers_upd_event for customers active after update as
begin
    post_event customer_upd(new.id, new.name, new.street, new.city);
end^


Thread-safe Client-API


Thread-safety for the client-API would be a real gain. The whole programming world is becoming more and more parallelized to increase the speed and responsiveness of programs; even more so with dual-cores and quad-cores on desktop PCs.
One possible approach that does not provide more speed, but would make it possible to access the same connection from multiple threads, would be to have one 'db-access-thread' automatically started by the client-API that communicates in a thread-safe way with every client-API function (e.g. via pipes or something similar). That way, each thread would still have to wait for the other threads' operations to finish, but it would make it much easier to program responsive applications by putting database-operations that would otherwise block the GUI-thread into a separate thread, without the need to create a new connection (which would decrease performance, and, additionally, I really like to have only one connection per application). This can of course be implemented in the client-application (or a 3rd-party client-library) itself, but I currently know of no implementation which bothers to do so. Having such neat features in the client-API makes Firebird a better product 'out of the box', IMHO. Another idea is a completely asynchronous API, which would be the best solution. This would be done by making every API-call non-blocking and having an event delivered to the application when it finishes. Asynchronous handling should always be preferred to threaded handling when it comes to networking stuff, since it usually scales better and puts less stress on the client. On the other hand, it's usually far more complicated to program.

Dataset-buffer in Client-API


When programming lists that display datasets, you usually fetch the datasets from the database and then put them in some kind of buffer of your own. Programming that buffer can become a major task if the list is very large. There are techniques to speed up fetching this list, for example what I call 'lazy' data-reading:
you first read in all the datasets' IDs and store them in a structure that looks like this:

struct dataset {
    db_key id;
    struct dataset_content *content;
};

content is initially NULL, and when the program tries to read the content for the first time, it's fetched from the database with a separate cursor from the one that fetched the IDs. The two SQL-queries would resemble something like this:

  select id from customers where city='New York' order by name; /* Fetch all the IDs */
select name, street, city from customers where id=? /* fetch a specific dataset */


This 'lazy fetching' speeds up the creation of lists that *require* knowing the total size of the result-set, because fetching the IDs is a few times faster than fetching the data, and fetching the data is delayed until it is displayed. Most lists require the total size of the result-set, because implementing the scrollbar-logic for those lists without that information is not possible in any usable fashion. Nearly all SQL-based programs have very bad handling when it comes to lists and scroll-bars. Additionally, optimizing small-object allocation is nearly a MUST in this situation; otherwise performance is likely to degrade to unusable levels. This is true for any list-buffer, though, and I think it should be part of the client-API so the application does not need to worry about it. As much optimization as possible should be included in the client-API, because it'll make Firebird look like a more stable and faster database than others in subjective comparisons. Again, application-developers are unlikely to write these kinds of optimizations themselves in today's age, because bigger hardware is cheaper than software-optimization (especially on the client-side). Combining this with a multi-threaded or asynchronous approach is the ideal solution. I have created a proof-of-concept implementation with the techniques mentioned above, and I think I'll write a detailed description of the process and the results soon. IMHO the usability and responsiveness of what I've created is as good as it can possibly get.
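To make the 'lazy' part concrete, here's a small self-contained C++ sketch of such an accessor (hypothetical names; the placeholder stands in for the second, id-based select):

#include <memory>
#include <string>

struct DatasetContent { std::string name, street, city; };

class Dataset {
public:
    explicit Dataset(long id): m_id(id) {}
    // the content is fetched from the database on first access only
    const DatasetContent &content()
    {
        if (!m_content.get())
            m_content.reset(fetchContent(m_id));
        return *m_content;
    }
private:
    static DatasetContent *fetchContent(long /*id*/)
    {
        // placeholder for: select name, street, city from customers where id=?
        return new DatasetContent();
    }
    long m_id;
    std::auto_ptr<DatasetContent> m_content; // NULL until first read
};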

Creating a 'live' list


Without those features it's still possible to write a client-application that shows a 'live' list of datasets of a specific range in a specific order. By 'live' I mean that new, deleted and updated entries are visible immediately in real-time, without any polling going on. This decreases network-traffic and at the same time increases speed. First, I'd like to describe the approach that I see as optimal for implementing this without the aforementioned desired features. There are a few database-prerequisites that each table you want to watch 'live' has to meet: it needs an 'updated'-field which is set to the current datetime in 'before insert'- and 'before update'-triggers. Then it needs to post the events 'table_inserted', 'table_updated' and 'table_deleted' in the corresponding 'after'-triggers. The client-app, in turn, first selects the desired data and listens to the table_xxx events. Each time an _inserted or _updated event is fired, it adds an 'and updated>=:last_update' to its original select-statement to fetch only the updated datasets. Existing datasets (identified by the ID) are updated accordingly, and new datasets can be added to the list. Since you don't know how the list might be sorted (after all, that's hidden in the select-statement, which may be user-provided), the best bet is to add those datasets to either the top or the bottom of the list. The 'if the ID changes, I'm out of luck'-problem applies here, too: in that case, the updated dataset with the new ID would be interpreted as a new dataset, and I would see the same dataset twice in the list.
For best performance, you should include the 'lazy fetching'-optimization I described earlier: you first fetch the IDs only, then fetch each dataset as it is required. You need two separate selects for this. The first is like the original, but with the fields replaced by the 'id'-field only. The second adds an 'and id=:id' to the where-clause, with all previous where-clauses put into parentheses so that no operator-precedence messes up the result. Those are both difficult tasks to program, because it requires some effort to parse and modify the SQL structure. The lazy fetching is difficult because, when done multi-threaded or asynchronously, it quickly becomes complex and confusing. Parsing the SQL-statement is very complex, too, since writing a bullet-proof, Firebird-compatible SQL-parser isn't an easy task. That's why there should be a 'cursor'-like object in the Firebird-API that provides optimal fetching and buffering means, and some functions to parse and modify an SQL-statement.

Wednesday, January 16, 2008

Got BUGs?!

You just have to check this out, it's extremely cool:
http://www.buglabs.net/products
Bug Labs manufactures and sells the BUG device, a small micro-controller in a very stylish housing. It has an ARM CPU at its core (no clock-speed mentioned) and 128MB of main memory. It features hardware graphics acceleration, MPEG4 decoding and encoding, WiFi, USB, Ethernet and a hell of a lot more interfaces.
If it wasn't for the 300 bucks this BUG costs, it could be my world-dominating gadget thingie... or whatever.

Monday, January 7, 2008

Slimy Lisp video

There's a great video that gives you a kick-start in learning SLIME, the extremely advanced Lisp-mode for Emacs.
You can find it on common-lisp.net here: http://common-lisp.net/movies/slime.mov
It's very interesting even for non-Lispers, since it shows off features that the average or professional C++, C# or Java IDE can only dream of!