Sorting an NSSet

    //Sort a set

    NSSet *set = [[NSSet alloc] initWithArray:@[@(7), @(4), @(5), @(3), @(9), @(15)]];

    //Ascending

    NSArray *sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES]]];

    NSLog(@"%@", sorted);

    sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES selector:@selector(compare:)]]];

    NSLog(@"%@", sorted);

    sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES comparator:^NSComparisonResult(id  _Nonnull obj1, id  _Nonnull obj2) {

        return [obj1 compare:obj2];

    }]]];

    NSLog(@"%@", sorted);



    //Descending

    sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:NO]]];

    NSLog(@"%@", sorted);

    sorted = [set sortedArrayUsingDescriptors:@[[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES comparator:^NSComparisonResult(id  _Nonnull obj1, id  _Nonnull obj2) {

        return [obj1 compare:obj2];

    }] reversedSortDescriptor]]];

    NSLog(@"%@", sorted);

    sorted = [set sortedArrayUsingDescriptors:@[[[NSSortDescriptor alloc] initWithKey:nil ascending:YES comparator:^NSComparisonResult(id  _Nonnull obj1, id  _Nonnull obj2) {

        return [obj2 compare:obj1];

    }]]];

    NSLog(@"%@", sorted);

Four ways to make an NSPredicate

I learn by examples. I find that it connects things in my brain that abstract description does not.

    NSSet *set = [[NSSet alloc] initWithArray:@[@(3), @(4), @(5), @(7), @(9), @(15)]];

    NSLog(@"%@", [set filteredSetUsingPredicate:[NSPredicate predicateWithFormat:@"SELF == %@", @(3)]]);

    NSLog(@"%@", [set filteredSetUsingPredicate:[NSPredicate predicateWithFormat:@"SELF == %@" argumentArray:@[@(3)]]]);

    NSLog(@"%@", [set filteredSetUsingPredicate:[NSPredicate predicateWithBlock:^BOOL(id  _Nullable evaluatedObject, NSDictionary<NSString *,id> * _Nullable bindings) {

        return [(NSNumber *)evaluatedObject isEqualToNumber:@(3)];

    }]]);

    NSLog(@"%@", [set filteredSetUsingPredicate:[[NSPredicate predicateWithFormat:@"SELF == $number"] predicateWithSubstitutionVariables:@{@"number" : @(3)}]]);

 

 

How iOS handles touches. Responder chain, touch event handling, gesture recognizers, scrollviews

I started out trying to make a scrollview with buttons in it. I ended up diving deep into the pool of touch events and gesture recognizers. Here’s what I found.

The problem with putting a UIButton into a scrollview is that when you tap the button, there’s a noticeable delay before the button is highlighted. Not satisfied with this, I started Googling around. I found a Stack Overflow answer and it all worked, but how does it work?

I needed to go back to the start.

What happens after you touch an iPhone screen?

  • How does a touch find the object that should respond to it?
  • How does the view handle the touch without gesture recognizers?
  • How do views use gesture recognizers?
    • How do gesture recognizers interact with each other?

 

How does a touch find the object that should respond to it?

The responder chain.

Basically, the window takes the touch and checks to see which of its subviews the touch is in (aka hit testing). That view checks its subviews, and so on until there are no more subviews to check and we’ve found the deepest view in the view tree that contains the touch. It’s helpful that views are responders, so they can handle touch events.

If that view isn’t able to handle the touch, the touch is forwarded back up the responder chain to its superview until a responder that can handle it is found.
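The hit-testing walk can be sketched as a recursive search. This is a Python illustration only (UIKit’s real entry points are hitTest:withEvent: and pointInside:withEvent:); the View class, frames, and names here are all made up:

```python
class View:
    """A minimal view with a frame (x, y, width, height).

    For simplicity, every frame here is in window coordinates;
    real UIKit converts the point into each subview's coordinates.
    """
    def __init__(self, name, frame, subviews=None):
        self.name = name
        self.frame = frame
        self.subviews = subviews or []

    def contains(self, point):
        x, y, w, h = self.frame
        px, py = point
        return x <= px < x + w and y <= py < y + h

    def hit_test(self, point):
        """Return the deepest view containing the point, or None."""
        if not self.contains(point):
            return None
        # Check subviews front-to-back (the last-added subview is frontmost).
        for sub in reversed(self.subviews):
            hit = sub.hit_test(point)
            if hit is not None:
                return hit
        return self  # no subview contains the point, so this view is deepest

button = View("button", (10, 10, 50, 20))
content = View("content", (0, 0, 200, 200), [button])
window = View("window", (0, 0, 320, 480), [content])

print(window.hit_test((20, 15)).name)    # button (deepest view)
print(window.hit_test((150, 150)).name)  # content
print(window.hit_test((300, 400)).name)  # window
```

The recursion stops at the deepest view containing the touch, which is exactly the view that first gets the chance to respond.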

How does the view handle the touch without gesture recognizers?

Touches have four phases: began, moved, ended, and cancelled. When a touch enters a phase, it calls the corresponding method on the view it’s in.

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;

 

What happens if you start touching in viewA and then move out of that view without lifting your finger? Does the touch call the method on viewA or viewA’s superview?

It turns out that the messages are sent to the view where the touch began.

Within these four methods, you get all the touches involved, and you can use them to define gestures and trigger actions. For example, if you want your view to recognize swipe gestures, you can record where the touch starts in touchesBegan:withEvent:, track how far it has moved in touchesMoved:withEvent:, and once it’s greater than a certain distance, call an action. That’s exactly what people did before gesture recognizers were introduced in iOS 3.2.
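That pre-iOS 3.2 approach can be sketched like this (a Python illustration; the SwipeTracker class and the 50-point threshold are invented, standing in for a UIView overriding touchesBegan:/touchesMoved:):

```python
import math

class SwipeTracker:
    """Tracks a touch through its phases and flags a swipe once the
    finger has moved farther than a threshold, mirroring what views
    did by hand before gesture recognizers existed."""
    THRESHOLD = 50.0  # points; arbitrary for this sketch

    def __init__(self):
        self.start = None
        self.swiped = False

    def touches_began(self, point):
        # Record where the touch started.
        self.start = point
        self.swiped = False

    def touches_moved(self, point):
        # Measure how far the touch has moved from its start.
        if self.start is None or self.swiped:
            return
        dx = point[0] - self.start[0]
        dy = point[1] - self.start[1]
        if math.hypot(dx, dy) > self.THRESHOLD:
            self.swiped = True  # in a real view, fire the action here

tracker = SwipeTracker()
tracker.touches_began((0, 0))
tracker.touches_moved((20, 0))   # not far enough yet
tracker.touches_moved((80, 0))   # past the threshold: recognized
print(tracker.swiped)  # True
```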

Code examples from Apple

How do views use gesture recognizers?

Gesture recognizers (GRs) are awesome because they take touch event handling to a higher level. Instead of tracking individual touches, a gesture recognizer just tells you whether you should act on a user’s touches.

Also, if you need to recognize a new type of gesture, you don’t need to make a new subclass of UIView. You can just make a new subclass of GR and add it to a vanilla UIView.

After the gesture is recognized, the GR calls an action on a target and that’s where your app reacts to the touch.

Gesture recognizers get first dibs: touch events go to them before being handled by the view. That means when a gesture recognizer recognizes a touch, it can prevent the delivery of touchesEnded:withEvent: to the view and have touchesCancelled:withEvent: called instead.

By default, during a swipe gesture, touchesBegan: and touchesMoved: are still called on the view while the gesture hasn’t been recognized yet. If you don’t want these events to be called, you can set delaysTouchesBegan to YES on the gesture recognizer.

There are awesome diagrams in the Apple docs for gesture recognizers (especially, Fig. 1-3, 1-5, 1-6).

How do gesture recognizers interact with other gesture recognizers?

By default, when a view has more than one GR, the GRs can receive the touches in a different order each time.

You can define the order of GRs with requireGestureRecognizerToFail:.

You can prevent touches going to GRs by setting their delegates and implementing gestureRecognizer:shouldReceiveTouch:.

More interactions between GRs

How did this help solve my problem?

The original problem was that a button in a scrollview responded to touches with a delay. The reason is that scrollviews have a property called delaysContentTouches, which is set to YES by default.

Turn delaysContentTouches to NO and you can tap buttons quickly now, but you can’t scroll anymore. That’s because “default control actions prevent overlapping gesture recognizer behavior”. The default control action here is tapping a button; the gesture recognizer it was preventing was the panGestureRecognizer on the scrollview.

Meaning that when you tap a button in a scrollview, the button tap takes precedence and you won’t be able to scroll, because all touches starting in a button are interpreted as taps.

In addition, I had to create a subclass of UIScrollView and override touchesShouldCancelInContentView: to return YES, so that touches are no longer all sent to the button and the scrollview is able to pan.

 

TL;DR

  1. User touches screen (ex. drags scrollview).
  2. Screen finds the view the user touched (i.e. scrollview).
  3. The view uses any gesture recognizers to process the touch, otherwise handles the touch itself (i.e. scrollview’s panGestureRecognizer).
  4. Gesture recognizers or the view trigger actions that change what appears on screen (i.e. scrollview moves) or changes the model.

ASCII, Unicode, UTF-8, UTF-16, Latin-1 and why they matter

In iOS, you have a string “hello world”. Most of the time you just need to assign it to a textLabel.text, uppercaseString it or stringByAppendingString it to another string.

If you stop to look under the hood of an NSString, you find a deep, complicated world with a rich history: tales of international conflict, competing standards, and a conservation movement.

Why does this matter?

If you’ve ever imported a weird Word document and seen a page with boxes and ? instead of accented letters, that’s because of encoding.

If you want your website to show up in other languages, you need to understand encoding.

 

Under the hood of NSString (in Objective-C)

If you want the first letter of an NSString, you can use characterAtIndex:

NSString *exampleString = @"Hello world";
unichar c = [exampleString characterAtIndex:0];

You get this unichar.

What’s a unichar?

typedef unsigned short unichar;

It’s a number. Here, c = 72, and 72 represents “H” when your computer uses UTF-16 to translate between numbers and symbols.

What is UTF-16?

Unicode Transformation Format 16-bit

UTF-16 translates between a number and the Unicode symbol it represents. Each of its code units is 16 bits. Some characters use one code unit; some need two.

Example Unicode chart (the numbers under the symbols are in base 16, so H = 0048, which is 72).

What is Unicode?

Computers don’t know what an “H” is. We need to tell them how to draw an “H”, so we use numbers to represent it.

ASCII was an early way of translating between numbers and symbols, but it was really basic. It could only represent 128 symbols: numbers, letters, and some punctuation. You only needed 7 bits to represent any ASCII symbol.

What happens when you don’t speak English?

ASCII doesn’t let you make accented letters in Spanish and French. So if you’re trying to read French, it won’t have the accents or won’t show those letters at all. Forget about trying to read Chinese or Russian: they have completely different characters, and ASCII has no idea how to show them.

Obviously, people in France and Russia and China wanted their internet to show their native languages, so they made systems for translating numbers to symbols too. The hard part was that many of these systems overlapped and only covered one language or a subset of languages. Latin-1 was one of these encodings with incomplete coverage.

How do we solve this problem?

We need a new system that has all the English characters, all the accents and all the Russian and Chinese characters too and every other possible symbol like emojis. That’s Unicode.

Check out all the code charts.

Does this actually work? Let’s find out

I looked at the Arabic code chart. So what I’m saying is that if you put the number under any of these Arabic characters into an NSString, it’ll show up on my screen?

Well yeah. Let’s try it.

Here are three random Arabic characters. I took the hexadecimal values under the characters, converted them into numbers (1692, 1693, 1695), and put them into an NSString with spaces in between in a Swift Playground.

[Screenshots: the Unicode chart entries for the three Arabic characters, and the resulting string rendered in a Swift Playground]

Yay it works! (Arabic is read right to left.) 😎

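The same experiment is easy to reproduce in plain Python (illustrative, not the original Swift Playground): chr() maps a Unicode code point, which is just a number, to its character, exactly as the NSString did.

```python
# chr() maps a Unicode code point (a plain number) to its character.
# 72 is 0x0048, the code point for "H" in the Unicode chart.
print(chr(72))  # H

# The three Arabic code points from the experiment above,
# joined with spaces just like the playground string.
arabic = " ".join(chr(n) for n in (1692, 1693, 1695))
print(arabic)
```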

What’s UTF-8?

Unicode Transformation Format 8-bit. Each code unit is only 8 bits instead of 16. That means each code unit can only take one of 256 values (2^8) instead of the 65,536 (2^16) you could potentially have with 16 bits. So is less more here?

In English, most of the characters we use are in the range 65 (0041 hexadecimal) to 122 (007A hexadecimal). In UTF-16, the first 8 bits of these are always 0, so some people thought it would be a good idea to get rid of them to save space.

In UTF-16, storing “H” in memory requires one 16-bit unit.

In UTF-8, storing “H” requires one 8-bit unit. Your English characters take half as much space.

But what if I need to show our Arabic character above?

You just need more 8-bit units to represent them. In this case, two will do.

It’s really nice to be able to assume that one code unit is one character, but if you make the code unit too small, it means that you need to have more units.

The tradeoff between UTF-8 and UTF-16 is between having code units that are big enough to contain all or most of the characters you need and conserving space.

There’s also UTF-32, where each code unit is 32 bits. You can make any character with one code unit, but for most of your characters you’ll be using up a lot of useless space.
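This tradeoff is easy to check in Python (illustrative): encode one ASCII character and one of the Arabic characters from earlier in each encoding and count the bytes.

```python
# Byte counts per character in each encoding. The "-be" (big-endian)
# variants skip the byte-order mark, so we count only the code units.
for ch in ("H", chr(1692)):  # "H" and one of the Arabic letters above
    for enc in ("utf-8", "utf-16-be", "utf-32-be"):
        print(repr(ch), enc, len(ch.encode(enc)), "bytes")

# "H": 1 byte in UTF-8, 2 in UTF-16, 4 in UTF-32.
# The Arabic letter: 2 bytes in UTF-8, 2 in UTF-16, 4 in UTF-32.
```

So the English character really does take half the space in UTF-8, while the Arabic character needs two 8-bit units, just as described above.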

Right now UTF-8 is the de facto standard on the web, used by 87.8% of websites.

What is character encoding all about?

The story of Unicode is a success story of how people came together when faced with a difficult problem of incompatible systems and actually made one system that works for everyone.

This story also shows how connected the world is, now that we need to be able to talk to each other across countries, and how the opportunities of the web are accessible to anyone with an internet connection.

 

Further reading:

Inspiration for biting the bullet and actually figuring this stuff out. 

An excellent explanation of how UTF-8 works.

 

What are Homebrew, RVM, RubyGems, Bundler?

What is Homebrew?

Homebrew helps you easily install and uninstall programs to your computer (a package manager).

Examples: gcc, postgresql, heroku, gnuplot, redis, rbenv, python3, memcached, octave, git

Installing Homebrew

Paste

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

into your terminal.

Installing an application (formula) with Homebrew

brew install <programName>

brew install octave

Update Homebrew

brew update

Update applications with Homebrew

brew upgrade

More Docs

 

What is RVM?

It allows you to manage your Ruby environment for each project. This usually means which version of Ruby you’re using.

This is useful because you might have an old project that you built with Ruby 2.2.1, and a change in 2.2.2 breaks your app and you don’t have the time to fix the problem, or can’t.

Rbenv is another ruby version manager.

Installing RVM

gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
\curl -sSL https://get.rvm.io | bash -s stable

Looking at the versions of Ruby available

rvm list known

Installing a version of Ruby

rvm install <versionName>

rvm install 2.3.1

Changing the version of Ruby you’re using

rvm use <versionName>

rvm use 2.3.1

Changing the default version of Ruby

rvm use 2.3.1 --default

More Docs

 

What is RubyGems?

It’s a package manager for Ruby, a programming language. It helps you install, update, and uninstall programs that run with Ruby, which other people have built and you might want to use. RubyGems is called an application-level package manager because it’s focused only on programs that run with Ruby.

More application level package managers:

  • cocoapods and carthage for iOS development using objective-c and swift
  • npm for node
  • pip for python
  • more

Examples of gems: nokogiri, rspec, bundler, rails, rake

Installing RubyGems

It comes with ruby. If gem -v doesn’t provide a number, then download it here.

Installing a gem

gem install <gemName>

gem install bundler

Updating RubyGems

gem update --system

Updating gems

gem update

More docs

 

What is Bundler?

Bundler manages the gems that you need for each project. Bundler is itself a gem. Instead of having to gem install <gemName> one at a time, you list the gems you need and it will gem install all of them for you.

If you already have a gem, it doesn’t need to install it again. It also knows the dependencies of the gems you want to use and will install those for you automatically.

RVM also has “gemsets” which is very similar.

In your project, you should have a Gemfile in the root directory.

It looks like this:

source 'https://rubygems.org'
gem '<gemName>'
gem '<gemName2>'

More on Gemfiles

Install all the gems in the gemfile

bundle install

Update the version of the gems

bundle update

Update bundler

gem update bundler

More docs

Installing Nokogiri

Nokogiri is notorious for being annoying to install because it has a couple of dependencies. In 1.6, they tried to add the dependencies to the package so that it would just work. I was out of luck on that.

I found a solution that did work.

gem install nokogiri -v '1.6.8.1' -- --use-system-libraries=true --with-xml2-include=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.12.sdk/usr/include/libxml2

It uses the system libraries and specifies where to find libxml2.

Race conditions, threads, processes, tasks, queues in iOS

What is a race condition?

When you have two threads changing a variable simultaneously, it’s possible to get unexpected results. Imagine a bank account where one thread is subtracting a value from the total and the other is adding a value.

The order of events goes like this:

  1. Access the total to subtract from it. (total = 500, expense = 40)
  2. Compute the difference. (total = 500, newTotal = 460)
  3. Save the new total. (total = 460)
  4. Access the total to add to it. (total = 460, income = 200)
  5. Compute the sum. (total = 460, newTotal = 660)
  6. Save the new total. (total = 660)

Great! 660 is exactly what we expected, but it could have gone like this:

  1. Access the total to subtract from it. (total = 500, expense = 40)
  2. Access the total to add to it. (total = 500, income = 200)
  3. Compute the difference. (total = 500, newTotal = 460)
  4. Save the new total. (total = 460)
  5. Compute the sum. (total = 500, newTotal = 700)
  6. Save the new total. (total = 700)

So the account is now at 700. That’s not what we expected. Uh oh.

When should we suspect a race condition?

When multiple threads are operating on the same variable and you’re getting unexpected results some of the time.

How do we fix this?

One option is to lock the variable total so that only one thread can operate on it at a time. This kind of lock is called a mutex.

But in iOS, you want to use Grand Central Dispatch (GCD) or operation queues to handle your tasks. So in the example of the bank total, we’d probably just use a serial dispatch queue that has sole access to the total variable. By using GCD or operation queues, you’re letting Apple do the thread management and removing the locking code. Less code is good.
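Here’s a minimal sketch of the mutex idea in Python (illustrative; on iOS you’d reach for GCD instead). The lock makes the read-compute-write sequence atomic, so the bad interleaving from the second list can’t happen:

```python
import threading

total = 500
lock = threading.Lock()

def adjust(amount, times):
    """Apply `amount` to the shared total, `times` times."""
    global total
    for _ in range(times):
        with lock:                        # only one thread at a time
            current = total               # 1. access the total
            new_total = current + amount  # 2. compute
            total = new_total             # 3. save

# One thread subtracts expenses, the other adds income, concurrently.
t1 = threading.Thread(target=adjust, args=(-40, 10_000))
t2 = threading.Thread(target=adjust, args=(200, 10_000))
t1.start(); t2.start()
t1.join(); t2.join()

print(total)  # 500 + 10_000 * (200 - 40) = 1_600_500, every run
```

Without the lock, the three steps from the two threads could interleave and some updates would be lost, exactly like the 700 in the example above.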

So what are threads, processes and tasks?

A task is an abstract concept describing something you want to do. In the example above, adding money to the total and subtracting money from the total are both tasks. They use a process to actually effect the change in code.

A process keeps track of what needs to be done and delegates the tasks to the threads. A process can have one or multiple threads. The process is like a project manager and the thread is like a worker.

What’s the difference between hardware and software threads?

The threads we’ve been talking about so far have been software threads. They’re (generally) independent units of computation.

The hardware threads are based on the number of cores in the computer. For example, the current 12-inch MacBook has a 1.1 GHz two-core processor. That means it has 2 cores, each with 1.1 GHz of clock speed. Each of those cores has 2 hardware threads, so there are 4 hardware threads in total.

Each of these hardware threads can run many software threads, depending on how the operating system uses them.

What’s the best Python IDE for beginners?

What I’m looking for:

  1. Good autocomplete tools and maybe debugging
  2. Short term use, therefore low cost
  3. Modern interface

 

A lot of people on Quora say PyCharm is really good ($89 a year for an individual; $199 for an organization; monthly plans $9).

As an iOS developer, I was interested to learn that Xcode works with Python too. You just have to do some configuration.

As a beginner with lower requirements, it seems like Komodo Edit (free) or a text editor like Sublime Text ($70) or TextMate (free) would also work.

 

Order for trying

  1. Komodo Edit
  2. Sublime Text or TextMate
  3. PyCharm
  4. Xcode

 

References:

https://www.quora.com/What-is-the-best-free-Mac-Python-IDE-for-a-beginner

https://www.quora.com/What-is-the-best-IDE-for-Python

http://stackoverflow.com/questions/5276967/python-in-xcode-7

https://www.jetbrains.com/pycharm/buy/#edition=commercial

The Best Algorithm for Learning to Cook

Cook to eat and cook to learn

There are two reasons for cooking: cooking to eat and cooking to learn. The main point of cooking is to eat healthy food, affordably without spending too much time or effort. Once that’s achieved, cooking allows you to learn about what foods work with your tastes and your body. You can also use cooking to learn about your own culture or other cultures.

The method I’m going to describe is inspired by building software which emphasizes reusable pieces.

 

A meal has three interchangeable parts

I tend to eat a high-fiber diet. That means my meals (lunch and dinner) are about 1/4 carbs, 1/4 protein, and 1/2 vegetables by volume, with plenty of oils. My snacks tend to be fruit and nuts. Breakfast is plain whole yogurt and bananas.

Your meals depend on what works for your body, but these guidelines are a reasonable starting point for a healthy diet. Americans tend to eat too much sugar, meat, and bread and not enough vegetables, so that’s why I err on the side of more vegetables.

We want to think of these three parts (carbs, protein, vegetables) as interchangeable pieces of a meal, like a three-person play that can have different actors.

When I start cooking, I think “what’s my carb, my protein, and my vegetables?”. 

As we learn more recipes, we can then identify what ingredients play these three roles and introduce these parts in our meals.

 

Flavor agents dress up our three parts

We’re trying to get a tasty meal at the end and we’re starting with raw ingredients. Most raw ingredients have a taste, but I think of them as a canvas for a range of flavors.

Boiling a raw potato would be rather bland so we add salt or sour cream to make it taste better. The salt and sour cream give the food additional flavors, so I’m going to call them flavor agents.

 

Cooking methods turn these parts of the meal into different forms

Cooking is the collection of methods (algorithms) for turning those raw ingredients (inputs) into good food (outputs). Here are some example outputs.

Examples of carbs

  • Steamed carbs
  • Mashed potatoes
  • Pizza dough, pita, flatbread
  • Burger bun
  • Corn tacos, arepas

Examples of proteins

  • Burgers
  • Meatballs
  • Breaded and fried fish
  • Roasted meats
  • Braised meats
  • Curries

Examples of vegetables

  • Salad
  • Sauteed vegetables
  • Roasted vegetables
  • Grilled vegetables

Examples of flavor agents

  • Vinegar
  • Ginger
  • Garlic
  • Black bean sauce
  • Spices
  • Herbs
  • Capers
  • Mayo
  • Mustard
  • Honey

The key to learning to cook (or really learning anything) is learning the next most useful skill enough to get you to the next level. You could call it agile. At that new level, you may find that you need a whole different set of skills. What’s missing is a leveling system to help guide cooking students. In the same way that when you play Pokemon Go, you feel great when you level up, you should be able to see your cooking skills level up too.

We like leveling up because we like the feeling of progress within a structure that tells you what to do next toward a given goal. Finding your own goal is really difficult, and you may spend a lot of time wandering before you find it.

 

A great cooking course organizes cooking methods into levels and guides you through them

The cooking course that I’m imagining starts everyone at level 1 of the four different parts (carbs, proteins, vegetables, and flavors). I would split the different methods into levels by difficulty, effort, ingredients, and the amount of equipment needed.

The first goal would be to complete level 1 for all four parts. Then it’s really up to the individual to choose what they want to get better at.

Within the levels, the course would describe the method, give examples of what kinds of ingredients can be put into the method and have the student cook a couple recipes with this method.

When the student is bored with a recipe, they can either cook another ingredient with the same method or move on to the next level. Since the three parts are interchangeable, they can be taught independently, and students can progress at their own pace based on their interest.

Now I just have to build it.

 

Designing cooking for millennials

Before, I talked about how cooking is not designed for millennials.

  1. Cooking for one every night is inefficient.
  2. Recipes assume you have ingredients and equipment. They also aren’t visual.
  3. Cooking education relies on recipes and not techniques.

What’s the answer?

 

Cooking for one

Cooking every night takes up a lot of time and effort. If you cook every night, it’ll take about 1 hour each night. The answer is cooking many meals at once. I like to cook six on Sunday nights.

Solutions

  1. Containers. I’d tried cooking larger portions before, but that didn’t work well because I didn’t have any way to store them. The key is buying containers and preassembling the meals into containers on the day you cook, so that all you need to do is heat one up when you want to eat.
  2. Variety. People get sick of eating the same thing all week. That’s why I cook three different meals for the week and I only eat them for dinner. That way you can still buy lunch and get a wide variety of food.

 

Recipes are broken for millennials

Recipes are not designed with the needs of millennials in mind. They don’t recognize that millennials don’t have a full pantry, a lot of kitchen equipment, or a lot of time. They are also heavily text-based.

Solutions

  1. A simple solution would be to include more data on the recipe about the number of ingredients, dishes, cooking techniques used and type of equipment it needs.
  2. We can also communicate recipes in pictures or videos instead.
  3. Recipes should be organized to maximize the amount of reusable pieces. For example, recipes that share a common set of ingredients or techniques should be grouped together.
  4. Recipes should be designed in a modular way to show what parts can be substituted.

 

Cooking education relies on recipes and not techniques

Cooking is something passed down from parents to children, but more and more, parents are not cooking, so where will children learn to cook?

People cook now by looking on Google for recipes. This leads to learning to cook by trying many different recipes. This is a very ineffective way to learn.

What are recipes?

They’re a written way to communicate instructions for making something.

The problem is that each recipe only recreates a special case; it doesn’t communicate how the ingredients were chosen or their purpose in the dish.

Following a recipe for roast pork is like learning for the first time that 3 x 5 = 15. Sure, next time you’re asked what 3 x 5 is, you’ll know it’s 15, but you didn’t learn how to multiply. If I asked you what 4 x 6 is, you won’t have a clue. In the same way, you haven’t learned how to cook meat by learning a recipe, just the answer to a specific problem.

Solution

The answer is teaching methods and using recipes as examples of methods.

The whole point of cooking education should be to teach the 20% of techniques that make up 80% of recipes so that people can reap the benefits of cooking and start experimenting with ingredients on their own.

How does that work? Next, I’ll show the cooking curriculum I’d use to teach cooking.