Fun With Function Pointers

One fun aspect of working in a variety of languages is being exposed to a variety of approaches, and syntaxes, for passing functions around. It has become syntactically easier with the growing acceptance of lambda expression syntax across more classically imperative languages. Passing closures around, almost as though they were first class objects (perish the thought!), can still lead to a lot of questions from all sides, particularly from folks not familiar with functional programming approaches. Closures bring up especially interesting questions because they capture scope. In this post, I’m going to address some of the most common questions, and pitfalls, I’ve found. By no means is it comprehensive.

Oh, I’m also going to be focusing on C-based derivatives (C, C++, and Objective-C) with a notable bias towards Mac technologies.

Function Pointers

Function pointers are the oldest and least technically complicated way to pass around functions. They work something like this:

void doSomethingAndCallMeBack(void (*callback)(int, int))
{
  callback(2, 2);
}

In this case, you have a function that takes a function pointer as a parameter. This technique works in C, C++, and Objective-C, and while it is a bit heavyweight syntactically (the syntax is usually covered up by typedefs), it gets the job done.
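For example, a typedef (the name here is just illustrative) hides most of the syntax:

typedef void (*BinaryIntCallback)(int, int);  /* illustrative name */

void doSomethingAndCallMeBack(BinaryIntCallback callback)
{
  callback(2, 2);
}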

The benefits of this approach are pretty clear:

  • Separation of concerns: you don’t need specialized functions for everything, and you can use callbacks to do the heavy lifting (think of a non-OOP strategy pattern).
  • Use of functional programming tools like map/filter/fold, albeit with limited scope (no closure support).
  • Hot-swappable behavior without recompiling; combined with modular programming, this permits you to write very dynamic programs - the C standard library itself accepts function pointers for functions like qsort (see the sketch after this list).
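
As a concrete illustration of that last point, here’s a minimal sketch of handing qsort a comparison function:

#include <stdio.h>
#include <stdlib.h>

/* Comparator passed to qsort as a function pointer. */
static int compareInts(const void *a, const void *b)
{
  int lhs = *(const int *)a;
  int rhs = *(const int *)b;
  return (lhs > rhs) - (lhs < rhs);
}

int main(void)
{
  int values[] = { 3, 1, 2 };
  qsort(values, 3, sizeof values[0], compareInts);
  printf("%d %d %d\n", values[0], values[1], values[2]); /* prints: 1 2 3 */
  return 0;
}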

If you’re sticking to pure C, things stay pretty simple. One important thing to note is that you should avoid casting function pointers to void * (you probably shouldn’t be doing this anyway…) because the compiler may represent function pointers entirely differently than it represents data pointers - so the cast can lose information, even if you cast back appropriately.

Blocks

Thanks to clang’s block extension, we get another way to handle passing functions around. Blocks are probably most notable for being used heavily in Objective-C for asynchronous programming, and they are basically lambdas by any other name.

Clang’s blocks are similar to the C++ lambda expressions we’ll be discussing a bit later on, but they capture anything from their surrounding scope that they read or write. Here’s an example:

int five = 5;

int (^addFive)(int) = ^int(int a) { return five + a; };

printf("addFive(3) = %d\n", addFive(3));

As you can see here, we declare a block, which is basically a function; it captures the parts of the surrounding scope that it uses, and we can then invoke it later. This can have some interesting implications in terms of memory management, which we’ll tackle in a bit.

One thing worth mentioning, for reasons I dig into a little more in the “advanced” section below: if you pass blocks around, it’s important that the callee copy the block, or in most cases the variables captured within the block will become invalid.

Blocks are useful because they provide an easy way to create anonymous functions and implement things such as generators. As noted previously, they are used extensively by Cocoa and other Objective-C frameworks to enable asynchronous programming.

Memory In Blocks

As we noted previously, blocks capture anything they reference, which can be problematic when it comes to object lifetimes and memory management. In the case of Objective-C objects, this isn’t too much of a problem, because we’re generally talking about heap allocated objects and we’re generally using automatic reference counting - this doesn’t remove all problems, but it does simplify the question of figuring out when a block is finished with an object.

One thing to note regarding blocks: unless you declare a variable with the __block storage class specifier before a block captures it, variables captured by a block are copied by value when the block leaves its parent scope. If you use __block, the variable is effectively captured by reference, and both the block and the original scope can make changes that affect each other - these changes are also not guaranteed to be atomic, so be careful with that. (It’s actually slightly more complex than this; see the section below for more…)
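
Here’s a minimal sketch of the difference, in plain C compiled with clang’s -fblocks:

#include <stdio.h>

int main(void)
{
  __block int counter = 0;  /* shared with the block, effectively by reference */
  int copied = 10;          /* captured by value as a const copy */

  void (^bump)(void) = ^{
    counter += 1;           /* legal: __block variables can be modified */
    /* copied += 1;            would not compile: the captured copy is const */
    printf("counter = %d, copied = %d\n", counter, copied);
  };

  bump();
  bump();                   /* second call prints counter = 2, copied = 10 */
  return 0;
}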

One last thing - just because captured values are copied by value, not by reference, remember that copying a pointer by value does not make the object it points to immutable or unable to receive messages; it just means you can’t reassign the original pointer from inside the block.

Am I weak?

One of the most common uses for blocks is asynchronous execution of a completion handler, e.g. “run this operation that loads data for me, and alert me when it’s complete.” In this case an object will generally create the block, and the block will in turn capture self so it can invoke a completion handler on the object that created it. There is a problem here, though: if the block has a strong reference to self, a processor object has a reference to the block, and self has a reference to the processor object, you have a retain cycle - and should the processor never exit, none of these objects will ever be deallocated, which can in turn leak any objects they hold strong references to.

Remember, ARC does what you tell it, not what you want.

We get around this by creating a weak reference to self to use within the block - this lets the block refer to self without retaining it, which breaks the retain cycle. Here’s how that looks:

- (void)sendRequest
{
  __weak typeof(self) weakRef = self;
  [request sendWithBlock:^(int responseCode) {
    [weakRef doSomeWork];
  }];
}

But there’s still a problem…

Or am I strong?

With this solution, we still run into issues, because each method call you make on the weak reference will independently execute against either the correct object or nil, depending on whether something has deallocated the object the weak reference points to. This means one call could succeed and the next could fail. Potentially even worse, you could end up passing nil as a parameter to a method that doesn’t expect it, and crash.

One thing to note here, which I’ve had regular debates about, is whether an object you hold only a weak reference to can be deallocated in the middle of a method call. If you read the ARC specification, under the section on ownership semantics, you’ll see that reading a __weak variable causes ARC to insert a retain before the read and a release after it. This means that when you call a method on a weak pointer, the call will atomically either execute against a valid instance or against nil.

The problem, as noted, is that this does not hold across multiple method calls - each call individually has a retain/release around it, but there is no guarantee a dealloc won’t happen between calls if the last strong reference is removed on another thread.

To solve this, you have a few options:

  • You can take a strong reference by assigning the weak self to a strong local variable, then verify that the strong reference is intact - this is also what you need to do before passing self as a parameter to a method, to verify it is not nil (unless the method you’re calling is fine with nil being passed in). See the sketch after this list.

  • You can create a single method on self that the block invokes - one guaranteed to succeed or go to nil as a whole - and have it in turn call any other methods you need

  • You can understand your object lifecycles well enough to determine if it’s even necessary to use a weak self in this case, and subsequently only worry about retain cycles in cases where they can actually arise
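
For reference, here’s a minimal sketch of the first option, the usual weak/strong dance (the handleResponse: call is just illustrative):

- (void)sendRequest
{
  __weak typeof(self) weakRef = self;
  [request sendWithBlock:^(int responseCode) {
    // Take a strong reference for the duration of the block body.
    __strong typeof(self) strongRef = weakRef;
    if (strongRef == nil) {
      return; // the object has already been deallocated
    }
    [strongRef doSomeWork];
    [strongRef handleResponse:responseCode]; // illustrative second call - safe, strongRef keeps the object alive
  }];
}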

I strongly recommend the third option.

Dig a bit deeper…

As I mentioned earlier, how blocks manage their memory and are allocated and passed around isn’t quite as simple as I explained. It’s true that you can think of __block as marking a variable to be passed by reference, but what it actually does is tell the compiler to store that variable (potentially) on the heap instead of the stack - or, if it is on the stack, to move it to the heap when a referencing block is copied.

When a block resides on the stack, variables captured from any storage class other than __block are copied as though by a const copy constructor - so you end up with variables copied by value. __block variables are instead shared with the block rather than const-copied, so you end up with the effect of pass by reference. Oh, one other note: if a captured variable’s type does not provide a copy constructor, the compiler will raise an error.

I also noted above that you need to copy blocks when you pass them around - when a block “leaves” its parent scope (read: outlives its current stack frame) it risks having its captured variables destroyed at the end of the enclosing scope, which corrupts the block. Copying moves the block and all of its captured variables to the heap, where they are destroyed only when the reference count reaches zero.
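
Here’s a minimal sketch of that in plain C with clang blocks, using Block_copy/Block_release from <Block.h> (under ARC in Objective-C the copy is typically made for you when the block is stored in a strong variable or a copy property):

#include <Block.h>
#include <stdio.h>

typedef void (^Callback)(void);

static Callback savedCallback;

/* The callee copies the block before storing it beyond the caller's stack frame. */
static void save(Callback cb)
{
  savedCallback = Block_copy(cb); /* moves the block and its captures to the heap */
}

int main(void)
{
  int value = 42;
  save(^{ printf("value = %d\n", value); });
  savedCallback();                /* still valid: the copy owns its captured data */
  Block_release(savedCallback);
  return 0;
}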

C++ Function Pointers

As noted, function pointers are fairly straightforward in C. When you get into C++, however, things get a bit more complicated. In C++ we have the additional concept of member functions, which are associated with an instance of an object. This means that for non-static member functions of a class, we end up with something more like…

void doSomethingAndCallback(void (MyClass::*callback)(int,int), MyClass& instance)
{
  (instance.*callback)(2,2);
}

Notice how we have to pass in an instance to invoke this method on. This can be problematic and lead to less-than-elegant solutions. You can use templating for some of this, but that can also lead to a lot more problems than maintainable solutions. There are also some tricks I won’t get into here (but will reference later) that you can use involving templates to break encapsulation in conjunction with function pointers.
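
For completeness, here’s a minimal sketch of calling it (MyClass and its log method are just illustrative):

#include <iostream>

struct MyClass
{
  void log(int a, int b) { std::cout << a << ", " << b << std::endl; }
};

// Same function as above, repeated so the sketch is self-contained.
void doSomethingAndCallback(void (MyClass::*callback)(int,int), MyClass& instance)
{
  (instance.*callback)(2,2);
}

int main()
{
  MyClass obj;
  doSomethingAndCallback(&MyClass::log, obj); // prints: 2, 2
}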

Put them together and you get…

When you intermix C++ and C function pointers you can get some fun results. The big issues are that the calling conventions for invoking the functions may vary, and that C++ has (compiler-specific) name mangling used to implement all the lovely namespaces - this means you need to be extra careful to use extern “C” correctly throughout. Note I didn’t say to be careful about passing member function pointers, because you shouldn’t be passing member function pointers to C, remember? How would you even cast them to something C understands? There are techniques to get around this, but you should generally be reviewing your architecture if you find yourself in this situation.

Functors and Functionoids

Another way we can pass functions around in C++ is using functors and functionoids. Functors are basically just objects that act as functions by overloading operator(); functionoids do the same thing, but instead of overloading operator() they expose an ordinary member function. Why you’d choose one over the other is a bit outside the scope of this document, but I will bring up one important case: because a functor overloads operator(), you can use it with templated functions that take “function-like objects,” which gives you the flexibility of passing in either a function pointer, functor, etc., because they are all called the same way syntactically (though behind the scenes there are important differences). Let’s look at an example!

struct between
{
  between(int low, int high) : low(low), high(high) { }
  bool operator()(int value) const { return value < high && value > low; }
private:
  int low;
  int high;
};

between between_4_and_8(4,8);
if (between_4_and_8(5))
{
  std::cout << "It works!" << std::endl;
}

You can see here that each between instance encapsulates a logical function based on the parameters to its constructor - in a way, the constructor is generating functions for you. This storing of state is critical to what a functor is. When designed correctly, the state held between calls can also be used safely in multi-threaded applications, something that is traditionally difficult to do in C and, as we’ll see going forward, a concern with closures as well.

C++ Closures

As I noted in the introduction, closures by many names have been making their way into more imperatively oriented languages as of late, and as such we are starting to see programmers use traditionally functional programming techniques (map, filter, and fold come to mind) in “traditionally” non-functional languages such as C++. The way C++ has dealt with closures is flexible, and it fits into C++’s template functionality in such a way that it becomes relatively easy to write a template function that can accept a functor, function pointer, or closure.

But let’s not put the cart before the proverbial horse (only the horse is proverbial, the cart is real). Let’s look at an example.

int low = 4;
int high = 8;
auto between_4_and_8 = [=](int val) -> bool {
  return val < high && val > low;
};

So in this example you can see that the closure is not only defining a function, but is also capturing the state around it. We’ll return to this state capture in a minute, but before going on, I’d like to bring us back to functors for a second.

State: Captured or Created?

It’s important to understand that a functor is an object which happens to have a call operator, and lambdas are (effectively) implemented the same way - as a small class that overloads the function call operator. That similarity aside, it is worth noting that how they keep state is slightly different. A functor explicitly declares its own state in its class definition, whereas a lambda’s state is captured from the enclosing scope and either brought into the implicit class definition as a reference or copied by value - depending on how you specify the capture.

Does this mean you can’t keep state between executions with a lambda expression? No, but it does mean you have to explicitly declare a variable outside the lambda expression for that purpose and capture it by reference. If you find yourself keeping track of complex state in a lambda expression and polluting the surrounding scope with variables that are otherwise unused, it may be worth considering a functor instead.

Capture: Honey or Vinegar?

One aspect of closures that makes them so powerful is that they are able to capture the scope around them - the same way functors have local state. C++ lets you specify several options for how that capture is performed: by reference or by value, for everything the lambda references, for all variables, or for an explicit list of variables.
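
Here’s a quick sketch of the capture options:

int x = 1;
int y = 2;

auto byValue     = [=]()     { return x + y; };     // x and y copied by value
auto byReference = [&]()     { return x + y; };     // x and y captured by reference
auto mixed       = [x, &y]() { y += x; return y; }; // x by value, y by reference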

One thing to note about capture: if you capture by reference and you’re running in a multithreaded environment, the values you’re using could change underneath you, and accesses to them are not guaranteed to be atomic - just like any other access in C/C++. When doing asynchronous programming, this is an often overlooked problem.

Another thing we run into here is capturing stack allocated variables that can become invalid. For instance, if a lambda captures, by reference, an integer that lives on the stack of the function where the lambda is defined, then once that function returns, the reference is no longer valid. Avoid doing this, or if you have to capture things from a scope that is about to become invalid, put them on the heap instead.
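
A small sketch of the trap (makeCounter is just an illustrative name):

#include <functional>

std::function<int()> makeCounter()
{
  int count = 0;
  return [&count]() { return ++count; }; // BUG: count lives on makeCounter's stack frame
}

// auto counter = makeCounter();
// counter(); // undefined behavior: the captured reference is dangling

// Capturing by value (with mutable) gives the lambda its own copy and is safe:
std::function<int()> makeSafeCounter()
{
  int count = 0;
  return [count]() mutable { return ++count; };
}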

STL Stuff

In addition to giving us beautiful, magnificent lambdas, C++ has also given us a number of types in the standard library to help us encapsulate the concepts of functions, function pointers, functors, functionoids, and lambdas. I’m going to go over a few of these briefly, but not in too much depth - some of them get into areas I’m not interested in covering in this article (but may in another), such as how some of the implicit conversions between functions and pointers work.

std::function

From a high level, an std::function can hold anything that can be invoked - pointers to functions, Callable objects, lambda expressions, member function pointers, functors, functionoids, etc. The value of this type is that it is a very high level representation of a function and presents a consistent, polymorphic way of handling callables, making it easy to pass them around as arbitrary parameters. Combined with bind, you also get some very powerful magic we’ll get to in a minute.
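
Before that, here’s a small sketch of the uniformity std::function gives you:

#include <functional>
#include <iostream>

int add(int a, int b) { return a + b; }

struct Multiply
{
  int operator()(int a, int b) const { return a * b; }
};

int main()
{
  std::function<int(int, int)> op;

  op = add;                                // plain function pointer
  std::cout << op(2, 3) << std::endl;      // 5

  op = Multiply();                         // functor
  std::cout << op(2, 3) << std::endl;      // 6

  op = [](int a, int b) { return a - b; }; // lambda
  std::cout << op(2, 3) << std::endl;      // -1
}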

std::bind

Bind does exactly what it says: given a Callable, you specify where each parameter of the function will come from, and it generates a forwarding wrapper. In other words, you can take a Callable (function pointer, lambda expression, functor, etc.) and get back an std::function that takes fewer parameters, or re-maps parameters. Let’s look at a simple example:

#include <functional>

int add(int a, int b)
{
  return a+b;
}

std::function<int(int)> generateAddFunction(int toAdd)
{
  return std::bind(add, std::placeholders::_1, toAdd);
}

In this example, generateAddFunction returns a function that accepts an integer and adds whatever value you specified to it.
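
For instance:

auto addFive = generateAddFunction(5);
int result = addFive(3); // result == 8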

This is incredibly powerful, and can permit you to create new functions simply by building on those already existing and augmenting them. This also permits you to simply and effectively provide alternative API surfaces (adapters, facades, etc) in a very lightweight way.

std::mem_fun

mem_fun is a way to wrap the ugly member function pointer syntax we discussed earlier in an object. Once wrapped this way, member function pointers are a lot easier to deal with, and they can be passed around to anything that takes a function object.
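
Here’s a quick sketch using std::mem_fn, the C++11 generalization of mem_fun (the Greeter class is just illustrative):

#include <functional>
#include <iostream>

struct Greeter
{
  void greet() const { std::cout << "Hello!" << std::endl; }
};

int main()
{
  auto greetFn = std::mem_fn(&Greeter::greet); // wraps the member function pointer in a callable object

  Greeter g;
  greetFn(g);  // invoke via an object
  greetFn(&g); // or via a pointer
}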

One big happy family

Now that we’ve talked about how C++ and Objective-C do lambdas, it’s worth noting that clang supports equivalency between the two. This is great news, because it means something expecting an std::function can accept a block, and you can use things like std::bind on blocks as well. This can make working across language boundaries easier when working with libraries written in C++.

References

Below are the references I used while writing this article. I strongly recommend checking them out if you want additional details.

  • https://isocpp.org/wiki/faq/pointers-to-members
  • http://clang.llvm.org/docs/BlockLanguageSpec.html
  • http://clang.llvm.org/docs/AutomaticReferenceCounting.html
  • http://clang.llvm.org/docs/LanguageExtensions.html
  • http://c-faq.com/ptrs/
  • http://en.cppreference.com/w/cpp/language/lambda
  • http://en.cppreference.com/w/cpp/utility/functional/function
  • http://en.cppreference.com/w/cpp/utility/functional/bind
  • http://en.cppreference.com/w/cpp/utility/functional/mem_fun
Nov 11th, 2014