Casting?
I'm writing some new teaching material at the moment. It is going roughly half as fast as I expected, which is about right in my experience.
Anyhoo, I'm doing casting, where you tell the compiler to convert from one type to another by putting the type in front:
int i = 1, j = 2;
float factor = (float) i / j;
I have to cast i to floating point in the sum; otherwise I get an integer division and no fractional part.
In other words, in the above code, where the value of i is 1 and the value of j is 2, I want the value of factor (which is 1/2) to be 0.5 (floating point division) rather than 0 (integer division). I get this result by casting i to float in the sum: once one operand is a float, C# uses floating point division for the whole thing, and everything comes out OK.
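Here's a little sketch of the difference, just to prove the point (the variable names are mine, and I'm assuming the values above):

using System;

int i = 1, j = 2;

float withoutCast = i / j;        // integer division happens first, so this is 0
float withCast = (float) i / j;   // i is converted to float, so the division is floating point: 0.5

Console.WriteLine(withoutCast);   // prints 0
Console.WriteLine(withCast);      // prints 0.5

Note that the cast binds to i before the division happens; if you wrote (float)(i / j) the integer division would already have thrown the fractional part away and you would be back to 0.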
This is a standard computing thing; most languages provide support for casting. And I started to wonder why it is called casting. Popular wisdom seems to be that it is related to casting things in a foundry, where you pour liquid metal into a mould of a particular shape. The shape of the mould determines the result of the cast. So by casting you can change one thing into another.
However, I've thought of another way to look at it. You can think of casting as making a movie. You take an actor (Christian Bale) and cast him as a character (Batman). For the duration of the film the actor behaves in terms of the role he has been cast into. This even works when we consider stupid casts. In C# you can't do things like cast a string into an integer. In films you can't do things like cast Christian Bale as City Hall. I quite like this way of looking at things, but one thing does worry me. Maybe this is the original meaning, and it has taken me years to figure it out...
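As a footnote, if you do ever want to turn a string into an integer in C#, you parse it rather than cast it. A quick sketch, with made-up names:

using System;

string text = "123";

// int oops = (int) text;       // won't compile - a string can't be cast to an int
int number = int.Parse(text);   // parsing does the conversion instead

Console.WriteLine(number);      // prints 123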