//Sort a set
NSSet *set = [[NSSet alloc] initWithArray:@[@(7), @(4), @(5), @(3), @(9), @(15)]];

//Ascending
NSArray *sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES]]];
NSLog(@"%@", sorted);

sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES selector:@selector(compare:)]]];
NSLog(@"%@", sorted);

sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES comparator:^NSComparisonResult(id _Nonnull obj1, id _Nonnull obj2) {
    return [obj1 compare:obj2];
}]]];
NSLog(@"%@", sorted);

//Descending
sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:NO]]];
NSLog(@"%@", sorted);

sorted = [set sortedArrayUsingDescriptors:@[[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES comparator:^NSComparisonResult(id _Nonnull obj1, id _Nonnull obj2) {
    return [obj1 compare:obj2];
}] reversedSortDescriptor]]];
NSLog(@"%@", sorted);

sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES comparator:^NSComparisonResult(id _Nonnull obj1, id _Nonnull obj2) {
    return [obj2 compare:obj1];
}]]];
NSLog(@"%@", sorted);
Four ways to make an NSPredicate
I learn by examples. I find that it connects things in my brain that abstract description does not.
NSSet *set = [[NSSet alloc] initWithArray:@[@(3), @(4), @(5), @(7), @(9), @(15)]];

NSLog(@"%@", [set filteredSetUsingPredicate:[NSPredicate predicateWithFormat:@"SELF == %@", @(3)]]);

NSLog(@"%@", [set filteredSetUsingPredicate:[NSPredicate predicateWithFormat:@"SELF == %@" argumentArray:@[@(3)]]]);

NSLog(@"%@", [set filteredSetUsingPredicate:[NSPredicate predicateWithBlock:^BOOL(id _Nullable evaluatedObject, NSDictionary<NSString *,id> * _Nullable bindings) {
    return [(NSNumber *)evaluatedObject isEqualToNumber:@(3)];
}]]);

NSLog(@"%@", [set filteredSetUsingPredicate:[[NSPredicate predicateWithFormat:@"SELF == $number"] predicateWithSubstitutionVariables:@{@"number" : @(3)}]]);
How iOS handles touches. Responder chain, touch event handling, gesture recognizers, scrollviews
I started out trying to make a scrollview with buttons in it. I ended up diving deep into the pool of touch events and gesture recognizers. Here’s what I found.
The problem with putting a UIButton into a scrollview is that when you tap the button, there's a noticeable delay before the button is highlighted. Not satisfied with this, I started Googling around. I found a Stack Overflow answer and it all worked, but how does it work?
I needed to go back to the start.
What happens after you touch an iPhone screen?
- How does a touch find the object that should respond to it?
- How does the view handle the touch without gesture recognizers?
- How do views use gesture recognizers?
- How do gesture recognizers interact with each other?
How does a touch find the object that should respond to it?
Basically, the window takes the touch and checks to see which of its subviews the touch is in (aka hit testing). The next view checks its subviews and so on until there are no more subviews to check and we've found the deepest view in the view tree that contains the touch. It's helpful that views are responders, so they can handle the touch events.
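Here's a rough sketch of that hit-testing recursion, a simplified version of what UIView's hitTest:withEvent: does (the real one also handles transforms and other details):

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    // Hidden, transparent, or non-interactive views (and their subviews) can't be hit.
    if (self.hidden || !self.userInteractionEnabled || self.alpha < 0.01) {
        return nil;
    }
    // If the point isn't inside this view, neither this view nor its subviews are hit.
    if (![self pointInside:point withEvent:event]) {
        return nil;
    }
    // Ask subviews front to back; the deepest view containing the point wins.
    for (UIView *subview in [self.subviews reverseObjectEnumerator]) {
        CGPoint convertedPoint = [subview convertPoint:point fromView:self];
        UIView *hitView = [subview hitTest:convertedPoint withEvent:event];
        if (hitView) {
            return hitView;
        }
    }
    // No subview contains the point, so this view is the deepest one that does.
    return self;
}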
If that view isn't able to handle the touch, the touch is forwarded back up the responder chain to its superview until a responder that can handle it is found.
How does the view handle the touch without gesture recognizers?
Touches have four phases: began, moved, ended and cancelled. When a touch enters a new phase, it calls the corresponding method on the view that it's in.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
What happens if you start touching in viewA and then move out of that view without lifting your finger? Does the touch call the method on viewA or on viewA's superview?
Turns out the messages are sent to the view where the touch began.
Within these four methods, you get all the touches involved and you can use these touches to define gestures and trigger actions. For example, if you want your view to recognize swipe gestures, you can keep track of where the swipe action starts in touchesBegan:withEvent: and then keep track of how far the touch has moved in touchesMoved:withEvent: and once it’s greater than a certain distance, you can call an action. That’s exactly what people did before gesture recognizers were introduced in iOS 3.2.
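Here's a minimal sketch of that pre-gesture-recognizer approach. The class name, the 50-point threshold, and handleSwipe are made up for illustration:

#import <UIKit/UIKit.h>

@interface SwipeView : UIView
@property (nonatomic) CGPoint touchStartPoint;
@property (nonatomic) BOOL swipeRecognized;
@end

@implementation SwipeView

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Remember where the touch started.
    self.touchStartPoint = [[touches anyObject] locationInView:self];
    self.swipeRecognized = NO;
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint currentPoint = [[touches anyObject] locationInView:self];
    // Once the finger has moved far enough to the right, call it a swipe (once).
    if (!self.swipeRecognized && currentPoint.x - self.touchStartPoint.x > 50.0) {
        self.swipeRecognized = YES;
        [self handleSwipe];
    }
}

- (void)handleSwipe {
    NSLog(@"Swipe recognized");
}

@end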
How do views use gesture recognizers?
Gesture recognizers (GRs) are awesome because they take touch event handling to a higher level. Instead of tracking individual touches, a gesture recognizer just tells you if you should act on a user's touches.
Also, if you need to recognize a new type of gesture, you don’t need to make a new subclass of UIView. You can just make a new subclass of GR and add it to a vanilla UIView.
After the gesture is recognized, the GR calls an action on a target and that’s where your app reacts to the touch.
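For example, here's a tap recognizer on a plain UIView, assuming this code lives in a view controller (didTap: is a name I made up):

- (void)viewDidLoad {
    [super viewDidLoad];

    UIView *plainView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didTap:)];
    // tap.delaysTouchesBegan = YES; // uncomment to hold back touchesBegan: from the view until the tap is resolved
    [plainView addGestureRecognizer:tap];
    [self.view addSubview:plainView];
}

// The action called on the target once the gesture is recognized.
- (void)didTap:(UITapGestureRecognizer *)recognizer {
    NSLog(@"Tapped at %@", NSStringFromCGPoint([recognizer locationInView:recognizer.view]));
}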
Gesture recognizers get first dibs. Touch events go to them before being handled by the view. That means, when a gesture recognizer recognizes a touch, it can prevent the delivery of touchesEnded:withEvent: to the view and instead call touchesCancelled:withEvent:.
By default, during a swipe gesture, the touchesBegan: and touchesMoved: events will still be called while the gesture hasn't been recognized yet. If you don't want these events to be called, you can set delaysTouchesBegan to YES on the gesture recognizer.
There are awesome diagrams in the Apple docs for gesture recognizers (especially, Fig. 1-3, 1-5, 1-6).
How do gesture recognizers interact with other gesture recognizers?
By default, when a view has more than one GR, the GRs can be called in a different order each time.
You can define the order of GRs with requireGestureRecognizerToFail:.
You can prevent touches from going to a GR by setting its delegate and implementing gestureRecognizer:shouldReceiveTouch:, as in the sketch below.
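A sketch of both, assuming a view controller that conforms to UIGestureRecognizerDelegate (the recognizer names and actions are made up):

- (void)configureRecognizersForView:(UIView *)view {
    UITapGestureRecognizer *doubleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didDoubleTap:)];
    doubleTap.numberOfTapsRequired = 2;

    UITapGestureRecognizer *singleTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(didSingleTap:)];
    singleTap.delegate = self;

    // Ordering: the single tap fires only after the double tap has failed.
    [singleTap requireGestureRecognizerToFail:doubleTap];

    [view addGestureRecognizer:doubleTap];
    [view addGestureRecognizer:singleTap];
}

// UIGestureRecognizerDelegate: refuse individual touches before the GR ever sees them.
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer shouldReceiveTouch:(UITouch *)touch {
    // Don't let the single tap see touches that start on a UIButton.
    return ![touch.view isKindOfClass:[UIButton class]];
}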
How did this help solve my problem?
The original problem was that a button in a scrollview had a delay in response to touches. The reason is that scrollviews have a property called delaysContentTouches set to YES by default.
Set delaysContentTouches to NO and you can tap buttons without the delay, but now you can't scroll anymore. That's because "default control actions prevent overlapping gesture recognizer behavior". The default control action here is tapping a button. The gesture recognizer it was preventing was the panGestureRecognizer on the scrollview.
Meaning that when you tap a button in a scrollview, the tapping of the button takes precedence and you won’t be able to scroll because all touches starting in a button would be interpreted as taps.
In addition, I had to create a subclass of UIScrollView and override touchesShouldCancelInContentView: to return YES, so that touches are no longer all kept by the button and the scrollview is still able to pan.
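Roughly, the subclass looks like this (the class name is mine; the pattern is the common one for buttons in scroll views):

@interface ResponsiveScrollView : UIScrollView
@end

@implementation ResponsiveScrollView

- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
        // Deliver touch-down to subviews (like buttons) immediately.
        self.delaysContentTouches = NO;
    }
    return self;
}

// Returning YES lets the scroll view cancel the button's touches once the finger
// starts dragging, so panning still works.
- (BOOL)touchesShouldCancelInContentView:(UIView *)view {
    if ([view isKindOfClass:[UIButton class]]) {
        return YES;
    }
    return [super touchesShouldCancelInContentView:view];
}

@end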
TL;DR
- User touches screen (ex. drags scrollview).
- Screen finds the view the user touched (i.e. scrollview).
- The view uses any gesture recognizers to process the touch, otherwise handles the touch itself (i.e. scrollview’s panGestureRecognizer).
- Gesture recognizers or the view trigger actions that change what appears on screen (i.e. scrollview moves) or change the model.
ASCII, Unicode, UTF-8, UTF-16, Latin-1 and why they matter
In iOS, you have a string “hello world”. Most of the time you just need to assign it to a textLabel.text, uppercaseString it or stringByAppendingString it to another string.
If you stop to look under the hood of an NSString, you find a deep, complicated world with a rich history: tales of international conflict, competing standards and a conservation movement.
Why does this matter?
If you've ever imported a weird Word document and seen a page with boxes and ? instead of accented letters, it's because of encoding.
If you want your website to show up in other languages, you need to understand encoding.
Under the hood of NSString (in Objective-C)
If you want the first letter of an NSString, you can use characterAtIndex:
NSString *exampleString = @"Hello world";
unichar c = [exampleString characterAtIndex:0];
You get this unichar.
What’s a unichar?
typedef unsigned short unichar;
It's a number. For example, when c is "H", its value is 72; 72 represents "H" when your computer uses UTF-16 to translate between numbers and symbols.
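You can check this yourself with the c from the snippet above (%C is the format specifier for a unichar):

NSLog(@"%hu", c); // prints 72
NSLog(@"%C", c);  // prints H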
What is UTF-16?
Unicode Transformation Format 16-bit
UTF-16 translates between a number and the Unicode symbol that it represents. Each of its code units is 16 bits. Some characters use one code unit, some need two.
Example Unicode chart (the numbers under the symbols are in base 16, so H = 0048, which is 72).
What is Unicode?
Computers don't know what an "H" is. We need to tell them how to draw an "H", so we use numbers to represent it.
ASCII was an early way of translating between numbers and symbols, but it was really basic. It could only represent 128 symbols, so you only get numbers, letters and some punctuation. You only needed 7 bits to represent any ASCII symbol.
What happens when you don’t speak English?
ASCII doesn't let you make accented letters in Spanish and French. So, if you're trying to read French, it won't have the accents or won't show those letters at all. Forget about trying to read Chinese or Russian, because they have completely different characters and ASCII has no idea how to show them.
Obviously, people in France and Russia and China wanted their internet to show their native languages, so they made systems for translating numbers to symbols too. The hard part was that many of these systems overlapped, and each one only covered a single language or a subset of languages. Latin-1 was one of these encodings with incomplete coverage.
How do we solve this problem?
We need a new system that has all the English characters, all the accents and all the Russian and Chinese characters too and every other possible symbol like emojis. That’s Unicode.
Check out all the code charts.
Does this actually work? Let’s find out
I looked at the Arabic code chart. So what I'm saying is that if I put the number under any of these Arabic characters into an NSString, it'll show up on my screen?
Well yeah. Let’s try it.
Here are three random Arabic characters. I took the hexadecimal under the characters, converted it into decimal numbers (1692, 1693, 1695) and put them into an NSString with spaces in between, in a Swift Playground.
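In Objective-C, the same idea looks roughly like this:

unichar codeUnits[] = {1692, ' ', 1693, ' ', 1695}; // the three code points, with spaces in between
NSString *arabicString = [NSString stringWithCharacters:codeUnits length:5];
NSLog(@"%@", arabicString);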
Yay it works! (Arabic is read right to left.) 😎
What’s UTF-8?
Unicode Transformation Format 8-bit. Each code unit is only 8 bits instead of 16. That means each code unit can only be one of 256 values (2^8) instead of the 65,536 values (2^16) you could potentially have with 16 bits. So is less more here?
In English, most of the characters that we use are in the range of 65 (0041 hexadecimal) to 122 (007A hexadecimal). The first 8 bits are always 00, so some people thought it would be a good idea to get rid of them to save space.
In UTF-16, storing “H” in memory requires one 16-bit unit.
In UTF-8, storing “H” requires one 8-bit unit. Your English characters take half as much space.
But what if I need to show our Arabic character above?
You just need more 8-bit units to represent them. In this case, two will do.
It’s really nice to be able to assume that one code unit is one character, but if you make the code unit too small, it means that you need to have more units.
The tradeoff between UTF-8 and UTF-16 is between having code units that are big enough to contain all or most of the characters you need and conserving space.
There’s also UTF-32, where each code unit is 32 bits. You can make any character with one code unit, but for most of your characters you’ll be using up a lot of useless space.
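You can see the tradeoff directly by asking NSString how many bytes the same characters need in each encoding (using the "H" and the Arabic character from earlier):

NSString *english = @"H";
unichar arabicChar[] = {1692};
NSString *arabic = [NSString stringWithCharacters:arabicChar length:1];

// "H": 1 byte in UTF-8, 2 in UTF-16, 4 in UTF-32
NSLog(@"%lu", (unsigned long)[english lengthOfBytesUsingEncoding:NSUTF8StringEncoding]);
NSLog(@"%lu", (unsigned long)[english lengthOfBytesUsingEncoding:NSUTF16LittleEndianStringEncoding]);
NSLog(@"%lu", (unsigned long)[english lengthOfBytesUsingEncoding:NSUTF32LittleEndianStringEncoding]);

// The Arabic character: 2 bytes in UTF-8, 2 in UTF-16, 4 in UTF-32
NSLog(@"%lu", (unsigned long)[arabic lengthOfBytesUsingEncoding:NSUTF8StringEncoding]);
NSLog(@"%lu", (unsigned long)[arabic lengthOfBytesUsingEncoding:NSUTF16LittleEndianStringEncoding]);
NSLog(@"%lu", (unsigned long)[arabic lengthOfBytesUsingEncoding:NSUTF32LittleEndianStringEncoding]);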
Right now UTF-8 is the de-facto standard on the web at 87.8% of web sites.
What is character encoding all about?
The story of Unicode is a success story of how people came together when faced with a difficult problem of incompatible systems and actually made one system that works for everyone.
This story also shows how connected the world is now that we need to be able to talk to each other in other countries and how the opportunities of the web are accessible to anyone with an internet connection.
Further reading:
Inspiration for biting the bullet and actually figuring this stuff out.
An excellent explanation of how UTF-8 works.
Installing Nokogiri
Nokogiri is notorious for being annoying to install because it has a couple of dependencies. In 1.6, they tried to bundle the dependencies with the package so that it would just work. I was out of luck on that.
I found a solution that did work.
gem install nokogiri -v '1.6.8.1' -- --use-system-libraries=true --with-xml2-include=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/libxml2
It uses the system libraries and specifies where to find libxml2.
We could be doing more with Human Intelligence
Computer scientists are working very hard to create an artificial intelligence that solves problems in many domains (general intelligence). Turns out there's already something with general intelligence: humans.
I am reminded of how grateful we should be for the human intelligence we take for granted, and of the gap between what we are doing with this intelligence and what we are capable of achieving.
Studying AI, we can follow its example of improving its intelligence by trying to improve our own. One way I'm doing that is by learning how machine learning works, so I can do more with the tools at my disposal. Another way we can do this is by looking at what may be holding us back from achieving our goals and making plans to change that part of ourselves, the same way that any halfway decent general AI would, given the chance.
The warnings about general AI sound kind of familiar. A group of intelligent organisms shapes their world to make it suit their every need at the expense of all other organisms. Yes, it sounds like humans. Need wood? Get it from rainforests. Need power? Burn some fossil fuels. Need feelings of love? Make a website just for cat videos. The only difference may be that our human conscience feels bad about the damage that we've done to the planet and strives to repair it. Computers don't have this conscience yet, but they're going to need it if we're going to survive with them.
We can take the analogy of a stamp collecting machine and imagine how we would do that. Then, being human, apply our conscience when thinking about what our “stamp” would be and collect that, be it money, power or good in the world. Time’s ticking. Better get on it.
How do New Yorkers spend their money?
I was curious how New Yorkers spent their money. Turns out the Bureau of Labor Statistics has that data.
How do New Yorkers spend their money?

Mostly housing (not surprising), transportation, food, and retirement savings.
How do Americans spend their time?

Sleeping (personal care) is most of it. Then we spend time on leisure and sports, work, and household activities (cooking and cleaning). This data includes retired people too, which is why the work hours are lower than expected. Amazingly, about 3 hours a day is still spent on TV.
What does this say about society and technology?
Top Areas of Money Spent
- Housing
- Transportation
- Food
- Retirement Savings
Top Areas of Time Used
- Sleeping
- Leisure and sports
- Work
- Household activities
First, it’s interesting that the things we spend all our time on are not the things that we spend our money on. It seems like we actually spend fairly little money on things that we choose and mostly spend money on things that we have to.
Second, it feels like the biggest changes have come in the areas where we spend our time (communications, entertainment) and not so much in the areas where we spend our money. Although it seems like there are some developing technologies that will change this, most notably self-driving cars.
I wonder if I feel that certain industries are not seeing progress because there hasn't been any, or because I don't know enough about all of these industries to judge. May warrant some further investigation.
Life is distractions
As I was blowing out the candles on my birthday cake, one of my friends surprised me with a question: what was the most important thing I had learned this year?
The idea of distractions had been rolling around in my head.
Why am I thinking about distractions?
I felt that I was spending so much time checking, reading, and watching the news that it filled up all the time when I wasn't working or with my friends. My particular poisons were podcasts, YouTube, and online news.
Two podcasts really nailed this feeling. I was not alone. This feeling was manufactured.
- Note to Self did a week-long bootcamp called Infomagical, where the goal was to be more deliberate about the information that we consume. Each day of the week had a challenge to try out techniques for combating this distraction.
- On the Ezra Klein Show, Andrew Sullivan talked about his 10-day meditation after quitting blogging (10:00, 14:50). Most poignant was when he said that there's "an entire economy built on [this distraction]" and asked: what are we distracting ourselves from?
Infomagical recognizes the feeling of being overwhelmed. More content is generated than ever, and we drown in the tsunami unless we're able to actively choose what we read and watch. This series really hit on some of the feelings that I experienced: being overwhelmed with news, clicking on one link after another, reading everything on the front page, reading but not talking about the news, and allowing the avalanche of news to consume my time and attention.
Andrew Sullivan knows that this content is generated on purpose. Media outlets fight for our attention so that they can sell this attention to advertisers, either to support their journalism or just to make money. It's not just news; it's YouTube, HBO, podcasts, Instagram. My favorite example of this comes from The Next Web. When you finish reading an article, you're greeted with the message "Shh. Here's some distraction", implying that they not only produce distractions but also feed them to you, whether you want them or not.

I’m not saying that all distractions are bad or that all distraction is sinister. Quartz and Vox tell me about things that I care about. CrashCourse reminds me that reading literature has something to offer about the human experience. Giving up my Sunday night to HBO connects me to other people. Listening to the Ezra Klein Show made me think about the ethics of our food. Most of these things entertain me. It’s when we’re not able to control the flow of information into our brains (addicted) that this becomes worrisome.
Why is it a problem if we’re distracted and entertained?
Because when you fill up all your time with distractions, they’re probably masking other problems. It becomes easier to find distraction than to face the problems that you’re trying to avoid.
- How should I make a living?
- What does it mean to love someone?
- Why should I have children?
- Should I relearn my native language?
- How do I maintain my family’s culture?
- Which friendships do I value and which of those am I neglecting?
- What should I be doing about the homeless people in our cities?
- How do I make society more fair?
- Should I be eating animals?
- Why don’t we treat our veterans the way that they deserve?
- Are we building Cylons who will destroy us?
- How do I make my life matter?
This piece is what I wished I had said at my birthday party. What I ended up saying was something like this: "I've been thinking a lot about distractions recently. A lot of life is distractions, but when you remove all those distractions you get to the real questions that you've been avoiding. That's the stuff that really moves you forward in your life. So I'm trying to be more mindful about how much time I spend on the internet".
What can we do about it?
Infomagical offers some suggestions. Their challenges are meant to help you take control of your information intake.
I came across Bullet Journal recently too. I often find myself in the situation where there's nothing on my mind, like when I've just woken up, and my first instinct is to check my phone. The phone's ability to grab you before you've found your focus is the biggest challenge to focus. When my task management system is on the internet, it's easy to avoid doing a task by popping open another tab and wandering off into the internet. Having an offline task manager means that I'm thinking about what I have to do before the internet has a chance to take me off on a journey.
Now, get off the internet and talk to someone about this.
(Space left intentionally blank for you to think before clicking on to the next thing.)
More about this idea
While writing this, I thought about some further reading and watching related to this idea. I left them until the end so that it would be less distracting.
Ezra Klein interview with Andrew Sullivan
Crash Course Philosophy Existentialism
Infomagical Challenge 4 – Talk to A Friend About What You Read for 7 Minutes
End of Season 1 Hiatus
I've been blogging weekly for about 4 months now. It's time to take a break, especially since I want to focus on my new job at a startup. When the app's released, I'll point to it. In the meantime, I might have a post here or there about my experiences. Happy coding.
Playing with iOS MapKit Part 4: Race Conditions, Other Bugs, Wrap-up
Playing With MapKit
Part 2 Reverse Geocoding and Custom Annotation Callouts
Part 4 Race Conditions, Other Bugs, and Wrap-up
As the functionality and the look was coming into place, I took note of bugs that I saw.
The most insidious bugs are the ones that don't happen all the time, or only happen under certain conditions (ex. when the internet is especially slow or the computer is especially fast). One example is the race condition.
What is a Race Condition?
A race condition is a situation where the order in which asynchronous responses arrive results in an undesired effect.
My Example


Very quickly after starting the app, I was tapping the current location dot and seeing "Current Location" in the bubble. It should have shown the address that the user's location was closest to. If I waited a couple of seconds before tapping the current location, it would display the address as expected.
This led me to believe that I had a race condition.
Under normal conditions:
- The map updates the user location calling its delegate mapView:didUpdateUserLocation: which starts the reverseGeocoding call based on the user’s location.
- The reverseGeocoding response is received and the completion block sets the annotation’s title with the address of the user location.
- When the user location dot is tapped, it will display a custom bubble using the annotation’s title in the label.
Usually the internet is fast enough that the reverseGeocoding response returns before the user location dot is tapped.
The two things that are racing are
1. the reverseGeocoding network call and
2. the tap of the current location dot.
If 1 finished first, then the app would work as intended. If 2 finished first, the experience would be suboptimal.
Design Choice
One way to fix this is to update the label when the reverseGeocoding network call comes back.
At the end of the completion block, I added a method call to updatePinCallout, which pretends that the user location dot was tapped again and reloads the address.
- (void)updatePinCallout {
    if (self.userLocationPin.selected) {
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            [self.userLocationPin setSelected:YES animated:YES];
        }];
    }
}
Just remember to use the mainQueue to retap the dot or else the address won’t update on screen.
This change also fixed another bug: when the user location jumped after Wi-Fi was enabled, the address label would not update to the new address.
Alternative Choice
Another thing that I found solved the problem pretty well was to zoom the map to the user location immediately, so that the animation didn't even give you a chance to tap the user location dot.
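A sketch of that alternative, assuming a hasZoomedToUser flag on the view controller and a made-up 500 m span:

- (void)mapView:(MKMapView *)mapView didUpdateUserLocation:(MKUserLocation *)userLocation {
    if (!self.hasZoomedToUser) {
        self.hasZoomedToUser = YES;
        // Jump straight to the user so there's no animation to tap the dot during.
        MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(userLocation.coordinate, 500, 500);
        [mapView setRegion:region animated:NO];
    }
    // ... kick off the reverseGeocoding call as before ...
}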
Other Bugs
Writing code for multiple scenarios unnecessarily
When I first implemented zooming to fit destinations, I only zoomed when the location was outside of the visible region. I realized that sometimes this would not be sufficient, because sometimes the map would be zoomed way out, so I decided to account for that scenario too: I would also zoom in if the destination was inside the original visible area. This got complicated quickly.
Eventually, I realized that it was too much and just made it always zoom to the destination. The code went from 14 lines to 1.
Testing Days Later
I thought I was done with the project. Wrong. A few days later, I was trying out the app on the bus and came across two bugs.


- I was on a bus and the location was updating very quickly and constantly centering the map on the new location. The problem was that I wanted to look around the map and couldn’t do that without it changing while I was using it.
- This is easily fixed by only zooming the first time the map loads.
- I was looking for a Pret A Manger. When I saw the results page, I found it utterly useless because all of them said the same thing and I couldn’t tell which location was where.
- I think it would have been cool to add some directions as to how far each location was from my current location and in which direction or add the cross streets.
As the Pragmatic Programmer said:
“Test your software, or your users will.”
Where to next?
There are certainly a lot of ways this could go. Add friends, add Facebook integration, add Yelp results, use Google Maps. That would be fun to do, but would lack direction.
For a product to be real, it must have a real use.
The way that this app fits into my life has something to do with lunch (ergo the name LunchTime). In an effort to build good habits, one of the ones I've picked is walking 10K steps a day. As I've discovered, this means walking 1.3 hours a day. If I don't plan to walk, I won't get near 10K.
So over lunch, I like to walk 10 to 15 minutes away and back. I think it would be cool to draw a diamond around your current location to see how far you can walk in 5 and 15 minutes and show how far each location is.
Options abound. Looking forward to the next expedition.
It’s been fun working with MapKit. Time to play around with something else. Who knows? I might be looking at WatchKit, EventKit, or something else entirely.