Friday, December 28, 2007

C and C++ interpreter

I'd heard of C interpreters before, but a mostly functional and compliant C++ interpreter was new to me. I just found out about one on the CERN homepage: it supports quite a large portion of ISO C++. The differences are explained in this document:
I haven't tried it out myself, yet.

Wednesday, December 26, 2007

C++ glue

My currently biggest concern with programming languages is the lack of interoperability. When you write libraries in C++, you can't use them in *any* other language without quite some effort. Good examples are the Qt and KDE libs. Other languages are completely incompatible with C++, so it's impossible to write truly reusable code in C++. Since C++ is one of my favourite programming languages, I'd really like to create a 'bridge' which converts C++ interfaces to pure C interfaces, which in turn can be used from any programming language. It is possible to create C interfaces from within C++ with the extern "C" linkage specification. This way you can have the implementation in C++, but the header file will be completely valid C code. C interfaces are the smallest common denominator which practically every sane language supports: if a language has dynamic object loading, it usually supports calling into C interfaces. This is a good thing, because it makes it possible to wrap C++ interfaces in C. There are, however, a few things you need to take care of.
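To illustrate the basic pattern (a minimal sketch with hypothetical names, not output of the generator): the declarations are valid C, while the implementations behind them are ordinary C++.

```cpp
// Sketch of the basic pattern, with hypothetical names: the declarations
// are valid C, the implementations behind them are ordinary C++.
extern "C" {
    struct counter;                               // opaque to C callers
    struct counter *counter_create(void);
    void counter_increment(struct counter *obj);
    int counter_value(const struct counter *obj);
    void counter_destroy(struct counter *obj);
}

// The C++ side: "struct counter" is a real class with state.
struct counter {
    int value = 0;
};

struct counter *counter_create(void) { return new counter(); }
void counter_increment(struct counter *obj) { ++obj->value; }
int counter_value(const struct counter *obj) { return obj->value; }
void counter_destroy(struct counter *obj) { delete obj; }
```

In the real glue, the extern "C" declarations would live in a header guarded by #ifdef __cplusplus, so the same file compiles as plain C.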

Objects: C does not know about classes and objects with methods. You can simulate them with naming and parameter conventions on plain C functions. This is somewhat of a no-brainer.
Operators: In C++, operators are nothing but functions with special syntax. You can't use that syntax in C, so I'll convert operators to functions with special names, like "assign_foo()" for the assignment operator "=".
Templates: In a first step I'll ignore templates. In a second step, you'll have to provide a list of types for which wrappers for template classes and functions should be generated. You'll need to say "I want the template function 'foo' to be available for the types 'bar' and 'baz' in the C interface".
RAII and automatic objects: Most (nearly all) languages don't support stack-based user-defined objects, so I'll only work with pointers in the C interface. There'll be no RAII, sorry.
Overloading: C does not support function overloading, so for overloaded functions I need to append a number to the function name. This means that "void foo(int); void foo(float);" will convert to "void foo(int); void foo_2(float);".
Constructors/destructors and new/delete: Since there are no stack-based objects in the glue code, there'll only be 'new'ed objects. This means that in the wrapper interface, creating an object and calling the constructor are strictly bound together. For each class "foo" I'll create constructor functions "create_foo", "create_foo_2" and so on, one for each constructor defined. The copy constructor, however, gets the special name "copy_foo()". Destructors are expressed by a prepended ~ in C++, a character that is not valid in a function name otherwise (neither in C nor in C++); the destructor will be called "destroy_foo()".
Because normal methods start with the class's name, special functions like constructors, destructors and operators start with the action's name instead, to make sure there'll be no name clashes.
Exceptions: Exceptions are a complicated topic. There's a whole section dedicated to them below.

For a very simple class "foo" with a constructor, copy constructor, destructor and one member function, the following wrapper functions will be generated, without name-mangling and with C linkage:
struct foo *create_foo();
struct foo *copy_foo(const struct foo *obj);
void foo_set_name(struct foo *obj, const char *name);
void destroy_foo(struct foo *obj);
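The matching C++ side of the glue might look like this. The body of the foo class is an assumption here; only the wrapping pattern is the point.

```cpp
#include <string>

// Hypothetical C++ side of the glue for the "foo" wrapper. The class body
// is an assumption; only the wrapping pattern matters here.
class foo {
public:
    foo() = default;
    foo(const foo &) = default;
    void set_name(const char *n) { name_ = n; }
private:
    std::string name_;
};

extern "C" struct foo *create_foo(void) {
    return new foo();                        // forwards to the default ctor
}

extern "C" struct foo *copy_foo(const struct foo *obj) {
    return new foo(*obj);                    // forwards to the copy ctor
}

extern "C" void foo_set_name(struct foo *obj, const char *name) {
    obj->set_name(name);
}

extern "C" void destroy_foo(struct foo *obj) {
    delete obj;
}
```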


In C++ you can throw any class you want as an exception, and such classes are not explicitly marked as being used that way. This means that the glue generator has NO idea what could be thrown by the C++ functions (nor does the user of the C++ library, by the way). That's why there needs to be some configuration where the user can declare which exception base classes will be caught. std::exception will always be caught, of course. The C interface will provide an error message and an error code to the C user. By default, the error code will always be 0 for std::exception-derived classes and only the error message is filled with content. There'll be a source file called exception_translators.cpp which includes the functions used to extract error message and error code from exception objects. Because the C interface will rewrite all by-value return types to by-reference (pointers), except for built-ins, it's possible to simply 'return 0' in case of an exception. But this does not mean that you should check for 0 in code that uses the C interface. Always use the function last_exception()! last_exception returns an exception-info object with the accessors "const char *exception_get_message(const struct glue_exception *obj)" and "int exception_get_code(const struct glue_exception *obj)".
There are the following functions for exception-handling:
struct glue_exception *last_exception();
const char *exception_get_message(const struct glue_exception *obj);
int exception_get_code(const struct glue_exception *obj);
void destroy_exception(struct glue_exception *obj);
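A sketch of how the glue side might implement this. The four C functions match the declarations above; the single global slot and the wrapped example function bar_get_name are my assumptions for illustration, and a real generator would additionally have to care about thread-safety.

```cpp
#include <stdexcept>
#include <string>

// Sketch of the glue side. The four C functions match the declarations
// above; the global slot and the wrapped example function are assumptions.
struct glue_exception {
    std::string message;
    int code;
};

static glue_exception *g_last = 0;           // single slot, not thread-safe

extern "C" struct glue_exception *last_exception(void) {
    glue_exception *e = g_last;
    g_last = 0;                              // caller takes ownership
    return e;
}

extern "C" const char *exception_get_message(const struct glue_exception *obj) {
    return obj->message.c_str();
}

extern "C" int exception_get_code(const struct glue_exception *obj) {
    return obj->code;
}

extern "C" void destroy_exception(struct glue_exception *obj) {
    delete obj;
}

// A wrapped function: the C++ body may throw, the C surface never does.
extern "C" const char *bar_get_name(int fail) {
    try {
        if (fail)
            throw std::runtime_error("name unavailable");
        return "bar";
    } catch (const std::exception &ex) {
        delete g_last;
        g_last = new glue_exception;
        g_last->message = ex.what();
        g_last->code = 0;                    // 0 for std::exception by default
        return 0;                            // safe: results are by-reference
    }
}
```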

And they should be used after *each* call to a wrapped function which can potentially throw. This is a major headache, of course, but that's just how error-handling in C HAS to work. There's absolutely no other way. Here's an example of how to "catch" an exception:
struct glue_exception *e;
name = foo_get_name(t);
if( (e = last_exception()) ) {
    printf("Exception: %s\n", exception_get_message(e));
    destroy_exception(e);
} else
    printf("Name: %s\n", name);

This should be everything needed to finish this project. I'm sure I'll post more on this topic later and will probably even get a working implementation soon, so stay tuned.
I have already created wrapper code that exemplifies how the generated code should look. In there, the "C++ library" consists of test.h and test.cpp and the wrapper code consists of "result.cpp" and "result.h", where result.h is compilable with both C++ and C compilers. The main.c is a C application which uses the C++ classes. Code speaks more than a thousand words. :-)

Monday, December 17, 2007

German introduction to the C++ standard library

Earlier this year I wrote an introduction to the Standard C++ Library. The text fills four pages and covers containers, strings and streams. It should fit the needs of German-speaking C++ newcomers quite well and can be used as a kind of kick-start.
Here's the link:

Friday, December 7, 2007

Showing framerate and temperatures in-game with RivaTuner

RivaTuner is a very tricky program. Its user interface is like a maze and it's virtually impossible for the average user to find some of the features it provides. The most interesting feature I know of is displaying the CPU and GPU temperature in an on-screen display in-game. With this you can easily check your temps while playing.
To do this, you first need to start the hardware-monitoring server. You need to create a Launcher item to configure this. Switch to the "Launcher" tab and press the plus-button below the list.

Then you need to select "RivaTuner module activation item" in the following dialog.

In the module picker use the following settings: Module type is "Low-level module" and Module name is "Hardware monitoring". You can leave the name empty and it'll create a default-name for you. After you've pressed "Ok" it'll show the new item in your Launcher-list.

Launch the Hardware monitoring sub-system by double-clicking it in the launcher list or picking it from the context-menu of the tray-icon (if enabled). You'll be presented a new window with lots of bar-charts and a few buttons.

It doesn't seem to work on the system I'm creating this tutorial on, but this shouldn't be a problem.
Now right-click one of the bar-charts and bring up the Setup dialog.

This beast has only one important setting right now. You need to enable "Show xxx in on-screen display" and run the monitoring server. After you've done that, there'll be a new tray icon in your system for the monitoring server where you can enable and disable the on-screen display. It's enabled by default, so we shouldn't worry about that right now.
You should already have an OSD in your games now. You can try it by running any DirectX or OpenGL application. I recommend the Quake 3 Arena demo for this. :-)
Back in the window with all the bar-charts you can click the "Setup" button in the lower right corner. After this the hardware monitoring setup will pop up and you'll be able to select all the plugins you want to have displayed in the OSD. Just enable the desired plugin, press Setup and enable "Show xxx in on-screen display" like we did before.
There are some RivaTuner plugins around which are not shipped with the default installation, so you should try Google if you don't have access to all the temperatures you want to display.
This is what it looks like with my current settings:

Tuesday, November 27, 2007

RAII as an alternative to Garbage Collectors

This text I've written is actually already pretty old, but I never really 'published' it on the web, because I was kinda too lazy. :-)
The topic is comparing RAII (Resource Acquisition Is Initialisation) to common Garbage Collector practices and why, in my opinion, Garbage Collectors are not needed in this world.
So here's a link, enjoy:
Garbage Collectors vs. RAII

Thursday, November 22, 2007

boost::asio, Visual Studio 2005 and Windows 2000

As we upgraded our compiler to 2005 earlier this year, I have now converted one project from Visual Studio 6 to the 2005 version. It is a service application and I struggled with it because it did not start. Then I ran the .exe from Explorer instead of the Services management console and saw that it required the (POSIX) function "freeaddrinfo", which is not present in Windows 2000's ws2_32.dll. This is a serious problem, because we want to support Windows 2000 Server edition for that software, especially since we run that version ourselves! So I dug into it and found out about this:

Lost backwards-compatibility

According to this Microsoft blog, the old versions of Visual Studio (read: 6 and earlier) defined getaddrinfo, freeaddrinfo and others as macros that called in-line functions declared in the now-legacy wspiapi.h header. When including Winsock2.h nowadays with VS 2003 (.NET) and higher, you don't include the wspiapi.h header anymore and use the usual function declarations from ws2tcpip.h instead. Those functions are now resolved at run time and loaded from ws2_32.dll. Windows 2000's version of this DLL does not provide these functions, because at that time they did not yet exist and were expanded to in-line functions instead.

The problem with asio

And here comes the trouble with boost::asio: boost::asio does not let you include the old Winsock-headers when compiling for a Windows-target greater than NT-version 5.1 (this means XP). It will #error out with the message "WinSock.h has already been included".


To circumvent this, you need to define _WIN32_WINNT to something lower than 0x0501. 0x0500 is quite reasonable for this.
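For example (a sketch; the define can equally go into the project settings or onto the compiler command line), before any Windows or asio header is included:

```cpp
// Target Windows 2000: must be defined before any Windows or asio header.
#define _WIN32_WINNT 0x0500
#include <boost/asio.hpp>
```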

Friday, November 16, 2007

Making .NET-assemblies available to native languages

As jb from #mono pointed out, there's already a tool that converts assemblies to corresponding glue code in C, called cilc. It's included in Mono and uses the Mono runtime libraries, so you need to have Mono installed to use it.
I could only find a man page so far, but no further documentation. It should be fairly easy to use, though, and the generated C code easy to understand, I guess.

Making C++ libraries available to .NET and native languages

I just got the following idea on how I could expose my pure, unmanaged C++ interface to .NET languages. The idea takes two steps:

Step 1

I create a C-interface from my C++ classes, which should be rather easy. Take the class "class Foo { void bla(); };" for example, which would be wrapped into C-code like "struct foo *foo_create();" and "void foo_bla(struct foo*);".
With this, I have a language-neutral interface that I can put into a library. By language-neutral I mean that nearly every language can interact with C interfaces. This can be done manually, or you can write a 'converter' by parsing the C++ headers and auto-generating C glue code.
I've done something similar while creating the fbsql Interbase/Firebird-API for Lisp, but that was all manual work.
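In code, the Foo example might come out like this (a sketch; foo_destroy and the cast pattern are my additions for illustration, not something from the post):

```cpp
// The C++ library class from the example above.
class Foo {
public:
    void bla() { ++calls; }
    int calls = 0;
};

// The C surface: "struct foo" stays opaque, the glue casts back and forth.
// foo_destroy is an assumed addition, since C callers must free objects too.
extern "C" {
    struct foo;

    struct foo *foo_create(void) {
        return reinterpret_cast<struct foo *>(new Foo());
    }

    void foo_bla(struct foo *obj) {
        reinterpret_cast<Foo *>(obj)->bla();
    }

    void foo_destroy(struct foo *obj) {
        delete reinterpret_cast<Foo *>(obj);
    }
}
```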

Step 2

Then I'll need to find a way to create CIL code that resembles this object-oriented C interface and creates a real class from it, filling the method implementations with stubs that load the library, find the symbols and P/Invoke the functions, respectively.
An alternative to directly writing the CIL code would be writing C# code instead. But I'd like to eliminate the dependency on a C# compiler by directly emitting CIL, if that is not incredibly hard. I don't have a clue what CIL looks like or how complex it is.

Direct solution

There is, however, an even more advanced approach. Theoretically I can parse C++ headers (with gcc-xml) and extract all classes and methods. From that I create CIL code which loads the library and the (mangled) symbols and does a bit of voodoo to fake C++'s object mechanism. It's actually not that hard: I only need to allocate memory and call the constructor on that memory with the correct calling convention to create an object.
I like having the C-interface, though, because it gives me more flexibility and I can use it from non-.NET-languages, too. This way I have a truly language-independent interface that should be easy to use from anywhere.

Wednesday, November 7, 2007

New AMD Phenom X4 fails to impress

The Phenom X4 is about to hit the shelves. There's been
word about pricing
already, but none about performance so far. Well, until now: there's a Crysis benchmark out. I think many expected a miracle on the AMD side, because their CPUs, to put it mildly, suck at this time. The results are disillusioning, though:
While the E6850, QX6850 and QX9650 (all 3GHz CPUs) run at the same average of 49 FPS, the Phenom X4 only does 46 FPS at the same clock-rate of 3GHz.
All Intel CPUs use a 333MHz (quad-pumped) front-side bus and a low multiplier of 9, while the Phenom X4 has a low FSB of 200MHz and a high multiplier of 15. That means the CPU's communication with the periphery is 40% slower, which might, in my humble, non-hardware-geekish opinion, be the reason for the slowdown.
And now here's my even more demolishing opinion: As you might know, I own a Q6600, which is clocked at 2.4GHz. I overclocked it to 3GHz, the same clock speed the benchmark used, and noticed virtually *no* difference. This could mean that the Phenom X4 is even slower than the Q6600 with 600MHz less clock speed!
It's still very early and I don't want to jump to conclusions as of yet, but I don't expect much from AMD's upcoming CPU generation.

Monday, November 5, 2007

Learning a foreign language? Try Anki!

There are many approaches to learning the vocabulary of foreign languages, and one of them is flashcards. There's an awesome and at the same time lightweight piece of software out there that helps you learn with flashcards. It's localized to English, Spanish, French, German, Japanese and Dutch and comes with example decks for Japanese and Russian words.
If you're interested, try Anki now at
It's totally free and even open-source.

Efficient and reliable file management

We all know the Microsoft Explorer. We all know the millions of products that copy it. We also know, from earlier times, the Commander software, and we should at least have seen a few of the millions of products that copy it. None of them suits my needs, unfortunately.

Status Quo

Efficiency is everything in my daily life with computers. You should be able to accomplish every task with a minimal count of keystrokes, and have every task freely pausable, stoppable and resumable, if technically possible. And, of course, you need to be able to parallelize tasks and have them work reliably next to each other.
For the Microsoft Explorer, for example, reliability is something unknown. In the default configuration, if one Explorer window, or some seemingly unrelated system task, crashes, it takes all your copy operations with it. And it's impossible to resume copy operations with the Microsoft Explorer. I have to admit that the new version shipped with Windows Vista is a complete overhaul and does address some of the main issues I had with XP and earlier Windowses. First, I like the task-dialogs. Task-dialogs give you big, recognizable buttons with labels that actually tell you what they will do when you click them, including a small, detailed explanation beneath them. This is a good example of intuitive, yet non-intrusive user-interface design. But the new Explorer still fails at reliably resuming copy operations or predicting their results. For example, you can't specify whether the timestamps of copied files should stay the same or be updated to 'now'. When moving files, the timestamps don't change. When copying files, they're updated. But, especially when copying large amounts of data that's in daily use, I want the timestamps to stay the same! And it may actually be a big problem if they change, for backup reasons. No chance with Explorer.


Additionally, I want to see progress in both bytes and percent. And I want to see the speed in a unit like MByte/s or MBit/s and a reliable prediction of the ETA. Here, too, the new Vista Explorer does a better job than its XP counterpart. I guess everybody has already fallen victim to a 500kb file that, according to Explorer, would take 32038 days to copy from the CD to your hard disk.
Then, to increase throughput, you might want to fine-tune some parameters like the buffer size. The buffer size can be very important in copy operations, depending on the source and destination. For example, for a transfer from an SMB network share to your local hard drive, versus one from one hard drive to another, you should use very different buffer sizes. The hard drives should cope with much bigger buffers than the network-protocol-bound SMB transfer (this is still speculation at this time, though).
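As a sketch of what such a knob looks like in code (the function name and interface are made up, not taken from any real file manager):

```cpp
#include <cstdio>
#include <vector>

// Sketch of a copy loop with a tunable buffer size -- the knob a file
// manager could expose. Returns bytes copied, or -1 on error.
long copy_file_buffered(const char *src, const char *dst,
                        std::size_t buf_size) {
    std::FILE *in = std::fopen(src, "rb");
    if (!in)
        return -1;
    std::FILE *out = std::fopen(dst, "wb");
    if (!out) {
        std::fclose(in);
        return -1;
    }

    std::vector<char> buf(buf_size);
    long total = 0;
    std::size_t n;
    while ((n = std::fread(buf.data(), 1, buf.size(), in)) > 0) {
        if (std::fwrite(buf.data(), 1, n, out) != n) {
            total = -1;                      // write error
            break;
        }
        total += static_cast<long>(n);
    }
    std::fclose(in);
    std::fclose(out);
    return total;
}
```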
Finally resume-operations should be fully customizable. I want options like 'Always overwrite newer', 'Ask if size differs', 'Always overwrite smaller', 'Check file-contents if smaller' and much more. I need those for various situations. You use file-copy operations in many different scenarios that all require different behavior.
A great feature would be to postpone files that the software is unsure about to the end. It happened *so* often that I started a large (like a few hours) file-operation and came back just to see that it asked me for the fifth file whether I want to hug, seize or overwrite it. If it skipped that file and already copied all the others, it would've saved me hours of either my free-time or my work-time.


Let's take the Explorer example again. In Explorer, you can do everything with the keyboard. But maybe not as easily as you'd want. Jumping to a new address can become a hassle. I noticed that the hotkey to jump to the address pane changed in every small update the Explorer ever got. I think it has already been set to each character you can find in 'address', namely a, d, r, e and s. In the Vista Explorer, you don't even know how to jump to the breadcrumb-style address pane. But I'll tell you a secret: It's Alt+D. For now, that is; it might change next week.
But still, everything you can do in Explorer you can do relatively "easily" with the keyboard. But there are just things you can't do with Explorer at all. Selecting files by pattern? Nope. You can use the search function as a workaround, which isn't optimal and only suits some limited needs. Imagine a dir filled with hundreds of seemingly random files, and you want to delete all that start with an A and end in .xxx. Maybe only if they have the archive bit set or have some special UNIX permissions. Or only directories. You simply can't, and have to pick them by hand. I want a mixed regular-expression and file-attribute filter to select my files to delete, copy or move them.


So here's a feature round-up that I demand from a file-management-software:
- Large number of options for file-transfers
- Resumable operations
- Independent, parallel operations
- Postponing files that it is not sure about
- User-interface completely keyboard-compatible
- Selection by regex and file-attribute filtering, maybe recursive
- Command-line as an alternative to GUI would be awesome!

High-performance, high usability SQL-List

This is about list controls in graphical user interfaces that display the contents of an SQL query. I think most of us know the hassles of scroll-bars in lists that are dynamically filled with result sets from SQL queries. The problem is that after you've begun to fetch datasets, you don't know how many are to come. It may be 20, or it may be 20,000,000.

Status Quo

So most application writers are lazy and just fetch a few more than fit into the list, and fetch more as the user tries to scroll down. That's actually not as lazy as it sounds, since even this simple solution is problematic to implement, depending on the GUI toolkit or API. Hooking up the correct scroll-bar events and making it at least somewhat bearable for the user can become quite a hassle in itself. And when you've managed to make it work, it's still far from usable: The user doesn't know how many datasets are in the list, and the scroll-bar and other visual cues might let him think there are only a few. After he has scrolled down for 5 minutes, he might finally understand that there's much more data than he initially thought. There's the Ctrl+End shortcut that jumps to the end of lists. Most implementations display an hourglass cursor or a progress bar in this situation. It might be slow, because they fetch all the content into the list when you want to reach the end.

There are better ways to do this!

Incrementally filling the list in the background (a second thread) might or might not be the best solution to this problem. First, you definitely need an activity indicator, like a small text saying "Fetching..." somewhere. And you need a way to cancel the fetching, in case the user doesn't even remotely care about any but the first 30, 10 or even 2 datasets, or simply cancels the operation. The implementation should be smooth enough that the user doesn't actually care whether it's still fetching or not, anyway. But being able to cancel it is not a bad idea, either.
Now to more detailed implementation suggestions: Fetching in the background is, of course, nice to have. The user can interact with the software as early as, or even earlier than, with the naive 'fetch-first-20' approach, and still, after some time, knows how many datasets there are. He can't know immediately how many there are, because that's technically impossible. You could do a select count(*) from table beforehand, but it's not even guaranteed that the second select will yield the same number of datasets, and it may cost a LOT of initial wait time and up to double the server load of the following approach. Fetching all available datasets takes some time, too, but we have no other choice than to let the user wait until we have gathered the needed information. But it's good to let him interact with the parts of the list we have already fetched.

Speeding it up

So there are two things you're eager to know: You need to know the contents of the first X datasets, where X is the number of datasets displayed in the list. And you need to know how many datasets are available in total. Fetching the first X datasets is easy, speeding up fetching all datasets is a bit tricky, though.
The trick is to fetch only the primary key of the relevant datasets. This will be magnitudes faster than fetching the whole dataset, which might include large blobs of text or pictures. Let's imagine a table "list" that consists of the fields ID (integer, primary key), Name (varchar) and Description (text). The list is generated from the query select * from list. What the implementation does is convert this SQL into two statements: select id from list and select * from list where id=?. You execute the first query, and check after each fetch whether a dataset's content needs to be fetched. In the display routine of the list, you mark each dataset that needs to be displayed as 'needs to be fetched' (if it isn't already available, of course). If there are datasets that need to be displayed, the fetching thread stops, fetches the relevant datasets and sends a signal to the list that those datasets are ready to be displayed. This works remarkably well. The only pitfall is the thread-synchronization issue, which might or might not be easily solvable in your programming language of choice. The fetcher thread signals new datasets to the list view, so the list view can increment its item count to update the scrollbar's appearance. It's wise not to do this after each fetch. If you did, the signal queue might fill up and slow down the application. You should only send an update signal to the list every 500ms or so. This is easier on the user's eyes and it'll yield better performance and usability. Depending on the list implementation, the slider of the scrollbar should be usable even while the list is being filled. With Qt's QListView, for example, this works like a charm.
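Stripped of threading and of any real SQL, the core of the two-query strategy might be sketched like this. All names here are made up, and a std::map stands in for the database; a real implementation would run the id pass in a background thread and throttle its update signals to the view as described above.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Single-threaded sketch of the two-query strategy. The map stands in for
// the database; only rows that are actually displayed get fully fetched.
struct Row {
    int id;
    std::string name;
    std::string description;
};

class LazyList {
public:
    explicit LazyList(const std::map<int, Row> &db) : db_(db) {
        // Pass 1: "select id from list" -- cheap, yields the total count.
        for (const auto &kv : db_)
            ids_.push_back(kv.first);
    }

    std::size_t count() const { return ids_.size(); }  // scrollbar range

    // Pass 2: "select * from list where id=?", only for rows on screen.
    const Row &row(std::size_t index) {
        int id = ids_[index];
        auto it = cache_.find(id);
        if (it == cache_.end()) {
            ++fetches_;                                // would hit the server
            it = cache_.emplace(id, db_.at(id)).first;
        }
        return it->second;
    }

    int fetches() const { return fetches_; }           // for demonstration

private:
    const std::map<int, Row> &db_;
    std::vector<int> ids_;
    std::map<int, Row> cache_;
    int fetches_ = 0;
};
```

Skipping to the end of a 20,000,000-row list then only costs the id pass plus the handful of rows that actually become visible.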


So we've managed to overcome every shortcoming that SQL-based lists usually yield:
- The user sees the first datasets immediately
- There's virtually no wait-time
- The total number of datasets will be available as fast as technically possible
- Only the content of datasets that need to be displayed will be fetched
- Even skipping to the end will not fetch every dataset
- The network-traffic is therefore kept MINIMAL for the desired behaviour

Monday, October 29, 2007

Overclocking the Q6600 on a P5N32-E SLI

So on Friday my Q6600 (Core 2 Quad 2.4GHz with G0 stepping) and the ASUS P5N32-E SLI arrived and I was, of course, eager to try them out. Well, my system is water-cooled, so it's quite a hassle to change the mobo, of all parts.

Initial problems

First thing I noticed after installing the new stuff was that the PS/2 keyboard did not work. HOLY SHIT, it's a freaking PS/2 keyboard and it does not work with that motherboard! I was (and still am) very shocked about that. It's like building a car with the steering wheel not working. Just not as dangerous.

So I hooked up my USB keyboard (which is not very comfortable, since it's a compact keyboard) and used that for the day. I installed XP and afterwards Vista and, except for a small networking quirk in XP, everything worked pretty darn well so far. Nice Vista drivers, even 64-bit. It wasn't all that easy the first time I installed Vista on my older P5WD2-E Premium, since that board didn't have great Vista support (how could it, having been built before Vista was around, anyway).

Here we go

After a little bit of initial gaming at stock clock speeds I decided to test its overclocking capabilities. Now, first, here's some more information on my system:
It's water-cooled, and cooling works pretty well, I'd say, since both CPU and GPU temps are okay. The only temperature I'm worried about is the motherboard's: The BIOS says it's 46°C, which I think is just too much. I installed a big fan later and it dropped to under 40°C. I have water-cooled 2GB (2x1GB) OCZ RAM with 5-5-5-15 timings, DDR2-800. Then there's the XFX GeForce 8800 GTS 640MB, which is clocked at 513MHz core stock; I run it at 700MHz for games and it doesn't even break a sweat at under 52°C. So I went for 3.0GHz for a start, and it worked nicely.


I simply changed the FSB to 333MHz (1333MHz quad-pumped). Windows boots, benchmarks run, everything's fine. Whenever I try to go just one MHz above 333, the system won't boot up. It won't even finish its POST. I think that's somehow ridiculous; I should definitely be able to do more with that CPU, it's a G0 stepping, after all. Updating the BIOS did no good either. After I updated it to revision 1205, I couldn't even get it to boot nicely at 3.0GHz. The speaker beeped like crazy every second time I booted the PC, so I reverted my BIOS back to the second-oldest version, which is 1103. I might add that with the 1205, the PS/2 keyboard worked. It won't with the 1103 I'm currently running. Additionally, the newest BIOS gave crazy voltages to my CPU. It went to 1.38V, although the CPU is spec'ed to a max of 1.37V, iirc. With the older BIOS revisions, the vcore was much lower.

My current computer-setup

After upgrading my box this weekend, it currently comes down to this:

CPU: Core 2 Quad Q6600, watercooled, clocked at 3.0GHz
Graphics: XFX GeForce 8800 GTS 640MB, watercooled, clocked at 700/2200MHz
Mainboard: ASUS P5N32-E SLI (nForce 680i SLI)
RAM: 2GB OCZ DDR2-800, watercooled