- ANDROID-ATOMICS
- ANDROID-MK
- APPLICATION-MK
- CHANGES
- CPLUSPLUS-SUPPORT
- CPU-ARCH-ABIS
- CPU-ARM-NEON
- CPU-FEATURES
- CPU-X86
- DEVELOPMENT
- HOWTO
- IMPORT-MODULE
- INSTALL
- LICENSES
- NATIVE-ACTIVITY
- NDK-BUILD
- NDK-GDB
- NDK-STACK
- OVERVIEW
- PREBUILTS
- STABLE-APIS
- STANDALONE-TOOLCHAIN
- SYSTEM-ISSUES
Tuesday, June 19, 2012
NDK Docs
Google has not placed its NDK documentation online. Consequently, any NDK-related Google searches that might be answered by the NDK documentation will come up empty. I'm tired of this, so I've decided to put up some of these docs, at least the ones that are not mirrored elsewhere already.
Friday, June 15, 2012
Android ListView: Make Selection Stay
The behavior of Android's ListView is surprising: when the user clicks an item, it doesn't stay selected! It looks selected for a brief instant and then fades away.
Apparently the "disappearing selection" is by design; it's something called "touch mode". I read through that document and still I have no idea why they thought it was a good idea. My guess is that, since Android was originally designed for small-screen devices, they expected that you would fill the screen with a list and then, when the user clicks an item, move to a new list on a different screen. Thus, the user wouldn't be aware that Android lost track of the selected item.
But this behavior is quite annoying if, for example, you want the user to select an item and then show information about that item on the same screen. If the selection disappears, how is the user supposed to know what they clicked (assuming of course that users have the attention span of a goldfish)?
One possible solution is to change all the list items into radio buttons. I don't really like that solution because it wastes screen real estate. I'd rather just use the background color to show which item is selected. I have seen one solution so far but it is not quite complete or general. So here's my solution:
1. In your XML layout file
Go to your ListView element and add the following attribute: android:choiceMode="singleChoice". I'm not entirely sure what this does (by itself, it doesn't allow the user to select anything), but without this attribute, the code below doesn't work.
2. Define the following class
It is used to keep track of the selected item, and also allows you to simulate pass-by-reference in Java:
public class IntHolder {
public int value;
public IntHolder() {}
public IntHolder(int v) { value = v; }
}
3. Put the following code somewhere
I'll assume you put it in your Activity, but it could go in any class really:
static void setListItems(Context context, AdapterView listView, List listItems,
                         final IntHolder selectedPosition)
{
    setListItems(context, listView, listItems, selectedPosition,
                 android.R.layout.simple_list_item_1,
                 android.R.layout.simple_spinner_dropdown_item);
}
static void setListItems(Context context, AdapterView listView, List listItems,
                         final IntHolder selectedPosition, int list_item_id,
                         int dropdown_id)
{
    listView.setOnItemClickListener(new AdapterView.OnItemClickListener() {
        public void onItemClick(AdapterView<?> list, View lv, int position, long id) {
            selectedPosition.value = position;
        }
    });
    ArrayAdapter<CharSequence> adapter =
        new ArrayAdapter<CharSequence>(context, list_item_id, listItems) {
            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                View itemView = super.getView(position, convertView, parent);
                if (selectedPosition.value == position)
                    itemView.setBackgroundColor(0xA0FF8000); // orange
                else
                    itemView.setBackgroundColor(Color.TRANSPARENT);
                return itemView;
            }
        };
    adapter.setDropDownViewResource(dropdown_id);
    listView.setAdapter(adapter);
}
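For example, you might call it from your Activity's onCreate like this (a hypothetical snippet; the widget ID and list contents are mine):

IntHolder selectedPosition = new IntHolder(-1); // -1 = nothing selected yet
ListView listView = (ListView)findViewById(R.id.myList); // hypothetical ID
List<CharSequence> items = new ArrayList<CharSequence>(
    Arrays.asList("Alpha", "Beta", "Gamma"));
setListItems(this, listView, items, selectedPosition);
// From then on, selectedPosition.value holds the index of the clicked item.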
That's all! The above assumes you want single selection. With some small modifications to getView(), you could support multi-selection too, I guess, but you should probably use checkboxes instead.
Warning: this solution needs further development. If the user uses arrow keys or buttons to select an item, that item will not be selected from the IntHolder's perspective. If the user presses the unlabeled button or the Enter key then the item will become "officially" selected, but then you have another problem because if the user uses the arrow keys again, it will sort of look like two items are selected. Leave a comment if you figure out how to keep the "internal selection" in the IntHolder synchronized with the "keyboard selection" or whatever it's called. What is it called, anyway?
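One untested idea for the keyboard problem: inside setListItems, mirror the "keyboard selection" into the IntHolder with an OnItemSelectedListener and repaint, something like:

// Untested sketch: track D-pad/trackball selection in the IntHolder too.
listView.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
    public void onItemSelected(AdapterView<?> list, View v, int position, long id) {
        selectedPosition.value = position;
        ((BaseAdapter)list.getAdapter()).notifyDataSetChanged(); // repaint highlight
    }
    public void onNothingSelected(AdapterView<?> list) {}
});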
Friday, June 8, 2012
The Delightful D Programming Language
D versus C# and C++
I became a D enthusiast yesterday, when I learned how much better it is than C++, and I have been studying D for two days straight out of sheer love. Oh, it's not perfect, but compared to C++? No contest. Ditto for Java. C# was my language of choice as of 3 days ago, but today I think it has moved down a rank.
Not having used D for any serious work yet, I could be mistaken. But D has an answer to every major criticism commonly raised against C++, from slow compilation, to poor type safety, to the headache of maintaining header files. D isn't just an evolutionary improvement; it has innovations that don't exist in any of the world's popular languages*:
- It is said to have one of the world's fastest compilers
- You can use try/catch/finally and RAII, but scope(exit) makes exception-safe code easier to read and write (see the sketch after this list)
- You can define closures that the compiler can inline (do any C++11 compilers do that? I'm not sure; I'm stuck on Visual C++ 2008 due to a need to support Windows CE)
- Garbage Collection is standard but optional, so you can write programs with low-latency guarantees by avoiding GC allocations (but how to manage memory instead? I suspect one could use alias this to make smart pointers a la C++?)
- Slices and ranges, a collection access mechanism that is far safer than C++ iterators and also far more convenient; no need to repeat yourself as in lower_bound(blobCollection.begin(), blobCollection.end(), blob)
- Generics are more flexible than in C++ and don't produce pages of error messages
- Compile-time metaprogramming that vastly outclasses C++ (and obviously C# too)
- Compile-time reflection which (I hope, but can't confirm) one could use to build a run-time reflection system if one wanted
- A well-designed, multi-paradigm approach to concurrency with interesting features for both shared-memory and message-passing architectures
- Built-in support for unit tests
- Array-wise expressions, e.g. a[] = (b[] + c[]) / 2 (MATLAB does this more tersely, but among general-purpose languages this kind of feature is rare)
- Superior floating-point features (e.g. nextUp()/nextDown()/ulp(), hex floats, control of hardware exceptions)
* (Other less-popular languages have some of these features, but certainly not all of them)
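To give a flavor of a few of these (scope(exit), slices, array-wise expressions, and built-in unit tests), here is a small sketch of my own; the file name is made up:

import std.algorithm, std.stdio;

void main()
{
    auto f = File("log.txt", "w");     // made-up file name
    scope(exit) f.close();             // runs at scope exit, even on exceptions

    int[] a = [1, 3, 5, 7];
    int[] b = new int[a.length];
    b[] = a[] * 2;                     // array-wise expression: b == [2, 6, 10, 14]

    auto tail = a[1 .. $];             // slice: a view of a; no copy, no iterator pair
    writeln(tail.filter!(x => x > 3)); // prints [5, 7]
}

unittest {                             // built-in tests; run with dmd -unittest
    assert([1, 2, 3].sum == 6);
}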
And since D is so similar to C++ and C#, you wouldn't have to spend a lot of time learning it, so why not? Plus, it shouldn't be that hard to port small programs and libraries from C++.
Admittedly, I'm not happy with everything. They are still catering to the C crowd, so you still have to fill your "switch" statements with "breaks", the "static" keyword is confusingly overused, lambdas require braces and a "return" statement (cf. C#'s quick x => x+1 syntax), all functions and try/catches require braces, pass-by-reference is implicit at the call site, the operator precedence of (&, |, ^) is foolish, there's no pattern matching or algebraic data types (but at least there are tuples), if statements are not expressions... still, what D offers is too good to pass up.
But of course, while the D language is clearly terrific and the standard library has apparently matured, the surrounding tools might not be so good: IDEs, support for smartphone platforms, etc. The only IDE I tried, Visual D (an IDE plugin for Visual Studio), works pretty well, including debugging, which seems to work as well as the Visual C++ debugger and can step into the standard library (fun!). However, code completion doesn't work very well yet.
Compared to C#, D is better in most areas but seems weak when it comes to dynamic linking and reflection. For example, an IDE written in .NET or Java could easily have a plug-in system, but I'm not so sure about D. .NET also offers runtime code generation, while D does not. However, a research compiler exists to compile D to .NET code. Given that C++ already compiles to .NET (C++/CLI), perhaps someday one will be able to use D equally well for managed and native code (with a small performance hit in managed land, of course).
Interoperability with C/C++ and .NET is pretty decent. D is supposed to interoperate with C++ functions and singly-inherited classes via extern (C++) and C++ name mangling (but which compiler's name mangling?), while you can easily create COM interfaces callable from .NET and other languages.
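For instance, here is a minimal sketch of the C++ binding (my own example, not from the D docs; it assumes a matching C++ definition is compiled and linked in):

// D side: declare a C++ function; the D compiler applies C++ name mangling.
extern (C++) int addInts(int a, int b);
// C++ side, compiled separately:
//     int addInts(int a, int b) { return a + b; }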
Still don't know D? Go forth and learn it! Note: okay, to be honest the documentation isn't that great. It really helps to read The D Programming Language.
Thursday, June 7, 2012
Smart Tabs
Man, I sure wish programming text editors would support those smart tabs known as "elastic tabs".
Use Visual Studio? Vote for elastic tabs.
Thursday, April 5, 2012
Shouldn't you learn that new Microsoft API?
I recently responded to someone trying to justify learning all those newfangled Microsoft APIs. It sounded like a blog entry, so I'm copy-pasting it here.
While you should definitely learn some of the new MS technologies, my recent experience learning about things like WPF and WCF has made me a bit more cautious about learning the newest MS APIs.
Now, LINQ is a major boon to productivity that you should definitely be using; the nice thing is that you can introduce LINQ-to-objects piecemeal in all sorts of random situations, with or without the actual query syntax (I commonly call `Where()` without the `from-select` syntax, as it is often shorter.) The mathematical dual of LINQ, Reactive Extensions, is something you should be aware of, although I am still struggling to find a really good use case. Likewise, all features of C# 3/4/5 are useful so you should study them and watch for places where they will be useful, even as you continue using the "old" BCL stuff.
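For instance (my own toy example, assuming a Person class with Age and Name properties):

using System.Collections.Generic;
using System.Linq;

static List<Person> Adults(IEnumerable<Person> people)
{
    // LINQ-to-objects with plain method calls; no from-select query syntax.
    return people.Where(p => p.Age >= 18)
                 .OrderBy(p => p.Name)
                 .ToList();
}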
However, I'm going to play devil's advocate and suggest that the very newest and biggest MS libraries, such as WCF and WPF, are not necessarily worth learning at all.
The primary reason is that they are huge, and not especially well-designed (the former being a symptom of the latter). I briefly blogged recently about why WPF sucks. As for WCF, the whitepaper makes it sound like it will interoperate nicely with a "Java EE server running on a non-Windows system" and "Partner applications running on a variety of platforms", but the truth is that WCF APIs are very specifically SOAP-oriented and have very limited support for non-SOAP protocols. MS could have easily designed a general system that allows pluggable protocols, and maybe the capability is hidden (undocumented) in there somewhere, but as far as I can tell they chose to design a much more limited system that can only do SOAP and limited HTTP (as long as your message body is a serialized .NET object, IIRC). I researched Entity Framework more briefly, but noticed some complaints that it was incapable of supporting some scenarios that the (much simpler) LINQ-to-SQL can handle out-of-the-box.
IMO the design of all of these libraries is fundamentally flawed because they use a lot of components that are tightly coupled to each other; a dependency graph of the classes in each framework would probably be huge and look like a jumbled mess of scribbly lines. And even if the design were good, we would not be able to tell, because there aren't public architectural documents that delve into the lower-level details, and the MSDN docs for the most part aren't very good (they tend to get less and less helpful as you look at lower and lower-level classes).
The sheer size of the libraries also seems like a flaw; I have learned over 20 years of programming that simplicity is a virtue, one that Microsoft has never valued.
But you might ask, "so what"? Well, with such big libraries you might never feel like you really understand them. That means that when you want to do something outside the use cases for which Microsoft specifically designed WCF/WPF/EF, you're not going to know how, and it is possible that no one outside Redmond will know how, either. And when something goes wrong, you will have a hard time figuring out what went wrong. And 15 years from now when Microsoft has moved on to their next next-generation API, no one is going to enjoy maintaining software built on a foundation that is so poorly-understood.
Also because of the largeness and complexity of these new APIs, the cross-platform alternative to .NET, Mono, has poor support or no support for them. You'll probably have little difficulty using your typed data tables on Linux or Mac, but Entity Framework? Forget about it. I wouldn't be surprised if Mono never supports it.
I have used LINQ-to-SQL in a new project and it's not bad. In some ways it could be better, but I think the developer experience is substantially better than ADO.NET. One significant limitation: L2S is easiest by far if you modify tables in "connected" mode, unlike the old ADO.NET which was specifically designed to work without an active database connection. Anyway, since LINQ-to-SQL is (according to Mono people) a quarter the size of Entity Framework, it's a shame that MS decided to quit working on it.
Saturday, February 25, 2012
Design flaws in .NET
I made this list of the flaws in .NET for StackOverflow, but then I noticed that the question is closed. Whatever, I may as well CC to my blog. All of these flaws except #3 have irked me personally. .NET is my favorite platform, yet it is still far from ideal.
- There are no subsets of ICollection<T> and IList<T>; at minimum, a covariant read-only collection interface IListSource<out T> (with an enumerator, indexer and Count) would have been extremely useful.
- .NET does not support weak delegates. The workarounds are clumsy at best, and listener-side workarounds are impossible in partial trust (ReflectionPermission is required).
- Generic interface unification is forbidden even when it makes sense and causes no problems.
- Unlike in C++, covariant return types are not allowed in .NET.
- It is not possible to bitwise-compare two value types for equality. In a functional "persistent" data structure, I was writing a Transform(Sequence<T>, Func<T,T>) function that needed to quickly determine whether the function returns the same value or a different value. If the function does not modify most/all of its arguments, then the output sequence can share some/all memory from the input sequence. Without the ability to bitwise-compare any value type T, a much slower comparison must be used, which hurts performance tremendously.
- .NET doesn't seem able to support ad-hoc interfaces (like those offered in Go or Rust) in a performant manner. Such interfaces would have allowed you to cast List<T> to a hypothetical IListSource<U> (where T:U) even though the class doesn't explicitly implement that interface. There are at least three different libraries (written independently) to supply this functionality (with performance drawbacks, of course--if a perfect workaround were possible, it wouldn't be fair to call it a flaw in .NET).
- Other performance issues: IEnumerator requires two interface calls per iteration. Plain method pointers (IntPtr-sized open delegates) or value-typed delegates (IntPtr*2) are not possible. Fixed-size arrays (of arbitrary type T) cannot be embedded inside classes. There is no WeakReference<T> (you can easily write your own--a sketch appears after this list--but it will use casts internally).
- The fact that identical delegate types are considered incompatible (no implicit conversion) has been a nuisance for me on some occasions (e.g. Predicate<T> vs Func<T,bool>). I often wish we could have structural typing for interfaces and delegates, to achieve looser coupling between components, because in .NET it is not enough for classes in independent DLLs to implement the same interface--they must also share a common reference to a third DLL that defines the interface.
- DBNull.Value exists even though null would have served the same purpose equally well.
- C# has no ??= operator; you must write variable = variable ?? value. Indeed, there are a few places in C# that needlessly lack symmetry. For example, you can write if (x) y(); else z(); (without braces), but you can't write try y(); finally z();.
- When creating a thread, it is impossible to cause the child thread to inherit thread-local values from the parent thread. Not only does the BCL not support this, but you can't implement it yourself unless you create all threads manually; even if there were a thread-creation event, .NET can't tell you the "parents" or "children" of a given thread.
- The fact that there are two different length attributes for different data types, "Length" and "Count", is a minor nuisance.
- I could go on and on forever about the poor design of WPF... and WCF (though quite useful for some scenarios) is also full of warts. In general, the bloatedness, unintuitiveness, and limited documentation of many of the BCL's newer sublibraries make me reluctant to use them. A lot of the new stuff could have been far simpler, smaller, easier to use and understand, more loosely coupled, better-documented, applicable to more use cases, faster, and/or more strongly typed.
- I've often been bitten by the needless coupling between property getters and setters: in a derived class or derived interface, you can't simply add a setter when the base class or base interface only has a getter; if you override a getter then you are not allowed to define a setter; and you can't define the setter as virtual but the getter as non-virtual.
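As an aside, here is a minimal sketch of that do-it-yourself WeakReference<T> (my own version, for the .NET of this era); note the internal cast:

using System;

public class WeakReference<T> : WeakReference where T : class
{
    public WeakReference(T target) : base(target) { }
    public new T Target
    {
        get { return (T)base.Target; } // the cast mentioned above
        set { base.Target = value; }
    }
}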
Monday, November 14, 2011
A taste of hardware programming
Have you ever tried hardware programming with Verilog or VHDL? During university I had to program FPGAs with Verilog, and it struck me that hardware programming was a lot like software programming, but at the same time very different. Instead of consuming plentiful RAM, hardware programming consumes a much scarcer resource: transistors. Physics can cause your program to malfunction, but aside from that, I really enjoyed my one little hardware programming course.
The fundamental difference between hardware and software is that every logic gate inherently operates in parallel, unlike software which inherently operates sequentially, and requires a special mechanism--threads--in order to do tasks in parallel. Traditionally, writing multithreaded programs has been quite difficult, due to the danger of race conditions, the danger of deadlocks, forgetting to lock data structures, and so on. So it was a little surprising that such problems don't necessarily happen in hardware programming. In fact I found that the biggest assignment I was given (to make an "alarm system") was relatively fun and easy. And what made it relatively fun and easy was (1) that the solution was small enough to fit in our FPGAs without much effort, and (2) I didn't have to "update" variables.
For instance, when the user activates the alarm system, she has 15 seconds to exit the building before the device is "armed". To make this work in a software program, you might write an "event" method and configure some sort of timer to call the method each second. When the event happens, you would check whether we're in a "countdown mode", and if so, decrement a variable representing the number of seconds left, then display the new number of seconds on the screen by explicitly calling a method to update the screen.
In a hardware program, it works a bit differently. It's been a few years since I did this in hardware, but I'll give you the gist of it. You don't need a timer "event"; instead, there is a hardware clock running at a known speed, which I'll say is 10 MHz, i.e. 10 million clock cycles per second or 0.1 microsecond per tick. So I created a counter that takes the hardware clock as input and restarts every 1/60 of a second (because part of the user interface updates 60 times per second), and when it restarts, I programmed a particular wire to be "on" (1 bit) during that clock cycle only (IIRC). I used that signal as input to a second counter that restarts every second and similarly sets a particular wire to "on" during one clock cycle per second (out of 10 million). I defined a third counter to represent the countdown, which decreases each second if the countdown is active. The counter is kept in BCD format (binary-coded decimal, 4 bits per decimal digit) so that no math is required to convert the digits into a format suitable for display to the user.
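A rough Verilog sketch of that counter cascade (my own reconstruction of the idea, not the original assignment; module and signal names are made up):

// Divide a 10 MHz clock to a once-per-second tick, then use the tick
// to decrement a two-digit BCD countdown (starting at 15) while active.
module countdown(
    input clk,                     // 10 MHz clock
    input active,                  // high while the countdown runs
    output reg [3:0] tens = 4'd1,  // BCD tens digit
    output reg [3:0] ones = 4'd5   // BCD ones digit
);
    reg [23:0] divider = 0;
    wire tick = (divider == 24'd9_999_999); // on for one cycle per second

    always @(posedge clk) begin
        divider <= tick ? 24'd0 : divider + 24'd1;
        if (active && tick && {tens, ones} != 8'h00) begin
            if (ones == 4'd0) begin        // borrow from the tens digit
                ones <= 4'd9;
                tens <= tens - 4'd1;
            end else
                ones <= ones - 4'd1;
        end
    end
endmodule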
The "screen" was a pair of 7-segment displays, i.e. a display that can show two characters (the numbers 0123456789 and, if you're creative, the letters AbCdEFgHIJLnoPqrStUy.) The "screen" showed only 16 bits of information total (two digits and two dots), so I simply connected 16 of the FPGA's metal pins directly to the 7-segment display. To draw something on the screen, my hardware program merely had to set those pins to "on" or "off" as appropriate.
I didn't have to execute any "commands" to make a number appear on the seven-segment (numeric) display. Instead, I wrote "functions" (not really functions, but blocks of code that represent circuits or sets of circuits) that did the following:
- Examine the state of the device and, if a countdown is in progress, produce the countdown value as output (the output is 5 bits per digit while the input is 4 bits per digit, so I just set the extra bit to 0). If a countdown is not in progress then something else is selected as output, such as part of a message (e.g. "rEAdy" might be scrolling across the screen. Actually, displaying anything but numbers was not part of the assignment; I just added the ability to display messages for fun.)
- Those two 5-bit numbers become the input to an output mapper, which converts each 5-bit number to a 7-bit character.
- The 7-bit character is passed directly to the output pins, unless it is modified to display an edit cursor (another visual flourish that was not part of the assignment, and not relevant to this blog post either, for that matter).
I never had to call these "functions" explicitly, either. Instead I simply declared that
- the thing that does the countdown is wired to the thing that selects a 5-bit output;
- that thing is wired to a pair of things that select 7-bit outputs; and
- the two 7-bit outputs are wired to the thing that chooses the 16-bit output.
The fact that stuff happens automatically, without you having to remember to issue "commands", makes programming easier, and thus hardware programming is at times easier than software programming... until you run into the physical limitations of your device. My "alarm system" had a two-character display, so I simply made a duplicate copy of the "function" that converts the 5-bit representation to 7 bits. That way, the two copies could run in parallel. This approach was viable because the display was only two digits; if the display had 10 digits or more, 10 copies of the function might not fit on the chip. If I wanted to save space, I would create only one instance of the function, and then add a special circuit to "call" the function on each digit in sequence (e.g. a different digit every clock cycle). In that case I would need to create registers to save the result of running the function on each digit. Moreover, the chip doesn't have enough metal pins to represent 10 digits, so the screen would define some communication protocol that the chip would have to follow. Potentially, it could get messy. But, the tasks that students were given were fairly easy and fun.
Ever since I tasted hardware programming (and probably before then, actually), I have wished that software could offer the same kind of easy, automatic updates, so that variables inside the program automatically propagate to the screen, or to wherever they are supposed to go. Data binding provides part of the answer, but the problem is that you don't usually want to display your program's variables directly on the screen; you want to filter, sort, and format those variables first. If the user modifies what's on the screen, reverse transformations are needed to figure out how the internal variables should be changed. Recently, I discovered that there is a .NET library that (if used correctly) will provide these wonderful automatic updates. That library is called Update Controls (at CodePlex).
Functional languages like Haskell and F# also make programming easier, generally speaking, and like hardware programming, functional languages are sufficiently different from popular "imperative" languages that it feels like something altogether different. However, functional languages don't tend to represent graphical user interfaces (GUIs) in a natural way. For GUIs, all hail Update Controls.
Wednesday, July 27, 2011
I want a conditional dot operator
I was writing earlier about features I'd like to have in the .NET Framework and C#, but I forgot a couple. I already wrote about the "slide operator"; now I'd like to propose the conditional dot or "null dot" operator.
Some people will say this idea makes the language too complex, but I can't seem to get enough language features. I use just about all of C#'s existing features, including the "hidden" ones. And still I can think of many unsupported features that I know would improve productivity, code clarity, or performance... at least for me.
I think it's okay for C# to have tons of features, but the IDE needs to help teach people what they mean. For example, Visual Studio should show the name of an operator in a tooltip when mousing over it, and show a help page for it if the user puts the cursor on it and presses F1.
C# already has a handy "??" operator which lets you choose a default value in case the "first choice" is null:
Console.WriteLine("Your name is {0}.", firstName ?? "(unknown)");I use this feature somewhat often. But something's missing. I would like, in addition, a "conditional dot" operator, another kind of null guard that deals with cases where an object you want to access might be null. For example, let's say your class has a reference to another class, and you'd like to inform it when something happened:
if (referenceToAnotherClass != null)
referenceToAnotherClass.OnSomethingHappened(info);
Of course you can't call the method if the referenceToAnotherClass is null. It would be nice if we could shorten this to something like
referenceToAnotherClass??.OnSomethingHappened(info);
If the method you want to call returns a value, the "??." operator would substitute null for the return value if the class reference is null. For example,
// firstName might be null; length will be null if firstName is null.
int? length = firstName??.Length;
It would be very natural to combine the "??." operator with the existing "??" operator:
// Equivalent to firstName != null ? firstName.Length : 0
int length = firstName??.Length ?? 0;
This operator would be most powerful when it is chained together, or used to avoid creating temporary variables:
// If "DatabaseConnection", "PersonTable", and "FirstRow" can all
// return null, chaining "??." simplifies your code a lot.
var firstName = DatabaseConnection??.Tables.PersonTable??.FirstRow??.Name;
// Equivalent to:
string firstName = null;
var dbc = DatabaseConnection;
if (dbc != null) {
var pt = dbc.Tables.PersonTable;
if (pt != null) {
var fr = pt.FirstRow;
if (fr != null)
firstName = fr.Name;
}
}
The operator should also help invoke events:
public event EventHandler Click;
protected void OnClick()
{
Click??.(this, EventArgs.Empty);
// equivalent to:
if (Click != null)
Click(this, EventArgs.Empty);
// Note: the dot in "??." is still required because "X??(Y)"
// would be indistinguishable from the null coalescing operator.
}
Somebody implemented a "null dot" extension method that provides this kind of "operator" in C#, except that it only supports one out of the four cases I just described, as it requires that the function you want to call return a reference type; it doesn't support void or struct return values. It's also slightly clumsy, and since it relies on a lambda, it hurts performance. To work well, this feature needs language support.
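Here is a rough reconstruction (mine, not the actual library) of that kind of extension method:

using System;

public static class NullDotExtensions
{
    // Handles only the case where the target function returns a reference
    // type--the limitation described above.
    public static TResult NullDot<T, TResult>(this T self, Func<T, TResult> func)
        where T : class
        where TResult : class
    {
        return self == null ? null : func(self);
    }
}
// Usage: var pt = DatabaseConnection.NullDot(dbc => dbc.Tables.PersonTable);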
Right now I am working with code that often converts objects to strings (the objects are usually strings, but may be something else). The object is sometimes null, so I write
string s = (obj ?? "").ToString();
This works fine, but it's less efficient than it could be, because if obj == null, a virtual call to ToString() is made on the empty string "". If the "null dot" or "conditional dot" operator existed, I would write this code as
string s = obj??.ToString() ?? "";
or even
string s = obj??.ToString();
if a null result is acceptable.
I know that an operator like this exists in some other languages, but I don't know which ones at the moment. Anybody remember?
P.S. Microsoft somehow forgot to include a compound assignment operator ??=, which should work like the other compound assignment operators:
twosies += 2; // equivalent to twosies = twosies + 2
doubled *= 2; // equivalent to doubled = doubled * 2
// ensure myList is not null
myList ??= new List<int>(); // myList = myList ?? new List<int>()
Tuesday, July 5, 2011
C++ vs C# Speed Benchmark, v.2
I just finished the second edition of my detailed benchmark comparing C++ and .NET performance. Hope you like it.
Why WPF sucks: an overview
I was just reading this article detailing someone's frustrations with WPF, and thought I should add to the chorus.
I agree with Mr. Mitchell, I'm fairly sure I will never like WPF. It has a steep learning curve because it's undiscoverable. WinForms was relatively easy to understand; its most complex feature was data binding. But WPF relies heavily on XAML, a language in which MS somehow expects you to "just know" what to type to get what you want, and when you somehow figure out what to type, half the time it doesn't work and you get a mysterious unhelpful exception (or no exception, but it still doesn't work.)
In WinForms, there was an easy-to-use designer and anything you did was translated to easy-to-understand C# (or VB) code; in fact that was the native representation! XAML, however, can't be translated to anything except BAML, so nobody can easily understand what a piece of XAML does, nor can we step through it in the debugger. To make matters worse, XAML relies heavily on dynamic typing (and even the C# part constantly makes you cast "object"s). This prevents IntelliSense from working very well, thwarts the refactoring tools (ever tried renaming a property that was bound in XAML?), and shifts tons of errors from compile-time to run-time.
In short, WPF programs look pretty, and offer better options for data visualization than WinForms (DataTemplates in ListBoxes are a major improvement), but beyond that they are a huge step in the wrong direction. How could Microsoft think this design was a good idea? If any company but Microsoft or Apple made a GUI architecture that was this difficult to use, no one would want to use it. For my company I started to develop some ideas for a GUI architecture that would have been much leaner and more elegant than WPF, but it looks like I may never write that code. The point is, I have some sense of how a GUI framework should work, and it's not like WPF.
I remember my first WPF app. I had an ItemsControl with a ControlTemplate containing a ScrollViewer with a Grid inside (note the learning curve: you can't use WPF very well without some idea what those terms mean.) I wanted to know: "How do I attach mouse event handlers to an ItemsControl to detect mouse clicks and drags on blank space?" The answer? In order to get mouse events with meaningful mouse coordinates (i.e. coordinates in scrollable space), it was necessary to obtain a reference to the grid using a strange incantation:
Grid grid = (Grid)_itemsControl.Template.FindName("Panel", _itemsControl);
Then I attached event handlers to the grid, and inside the mouse event handlers, got the mouse coordinates w.r.t. the grid using
Point p = e.GetPosition((IInputElement)sender);
Plus, in order to get mouse events on the entire surface, the control (actually the grid) must have a background, so my transparent control still wasn't getting events.
In another case I had to use this incantation:
var r = VisualTreeHelper.HitTest(_panel, new Point(e.X, e.Y));
var hit = r == null ? null : r.VisualHit as FrameworkElement;
In WinForms, pretty much all the methods you need are members of the control class you are using. But this example shows that in WPF, sometimes you have to call a static method of some other class to accomplish what you want. Then consider the minimalist documentation (so that you really need to buy a book to learn WPF)... that's undiscoverability at its finest.
Update: Also, WPF is shockingly inefficient, as you may learn if you need to use a non-virtualized ListBox in order to get MVVM to work and the list has more than about 1000 items. Trivial items (short text strings, no DataTemplate) consume about 8 KB of memory per row, or 80 MB for 10000 items, and ListBox takes 4 seconds to handle a single arrow key pressed in a list with this many items.