Best Practices for Working with Vertex Data on Metal

11/27/2015 § Leave a comment


If you are used to high-level graphics libraries – such as Core Graphics – you might be surprised by the amount of effort it takes to achieve the same goals using Metal. That is because Metal is a highly optimized framework for programming GPUs. As opposed to a high-level graphics library that provides easy-to-use APIs for custom drawing, Metal was designed to provide fine-grained, low-level control over the organization, processing, and submission of graphics and compute commands to the GPU, as well as over the management of the associated data and resources. In exchange, you can achieve much higher performance.

As you might expect, that doesn’t come cheap. As Uncle Ben once said, “Remember, with great power, comes great responsibility.”

For example, drawing a simple line on screen is not as straightforward a task as you might expect. If you want some thickness and rounded caps or joints, you have to translate that information into a data format the GPU understands (meaning you tessellate the line yourself). Besides, if you want good performance you need to understand the rendering pipeline and be aware of how the hardware operates, so your data is represented in a way it can be processed efficiently. And the rules are not the same for CPU-bound and GPU-bound operations.

It is surprisingly easy to write crappy code, performance-wise.

And that is what I want to share with you today – more specifically, how to work with vertex buffers.

There are a few practices you should always keep in mind when designing or developing Metal applications:

  1. Keep your vertex data small: A lot of what happens in your scene depends on computations made on top of the vertices you have. A very simple example is transform operations such as translation, scaling or rotation. When you translate an object, the translation matrix is multiplied by each vertex. The fewer vertices you have, the fewer multiplications are required. It is true that the number of vertices is tied to the quality of the scene, but keep in mind that objects far from the viewer don’t need as many vertices as objects that are near. Similarly, if you are working on a game, textures can be used to emulate many of an object’s features.
  2. Reduce the pre-processing that must occur before Metal can transfer the vertex data to the GPU: When designing your vertex data structure, align the beginning of each attribute to an offset that is either a multiple of its component size or 4 bytes, whichever is larger. When an attribute is misaligned, iOS must perform additional processing before passing the data to the graphics hardware (see the vertex struct sketch after this list).
  3. Avoid – or reduce the time spent – copying vertex data to the GPU: Transferring data from CPU to GPU memory space is generally the biggest bottleneck in a graphics application, because the GPU needs to wait for the CPU data to be copied over to its memory space. Metal allows you to write zero-copy implementations by using a CPU/GPU shared buffer. A shared buffer allows the CPU to write data while the GPU is reading it, resulting in high frame rates. That dance needs to be efficiently – and manually – managed to avoid having the CPU overwrite data the GPU is still processing. Techniques such as double buffering can be very effective for that purpose (see the buffering sketch after this list).
  4. Reduce computations performed for each vertex: Objects go through a process called tessellation before they can be rendered. Tessellation consists of representing the object as a series of triangles. As these triangles are laid out side by side, many of the vertices are shared among multiple triangles. You can reduce the number of vertices – avoiding vertex duplication – by using triangle strips instead. A triangle strip requires N + 2 vertices to represent N triangles, as opposed to 3 * N in the traditional representation. For best performance, your objects should be submitted as a single indexed triangle strip (a draw-call example follows the list).
  5. The GPU operations are not the only tasks you can optimize: There is actually a lot to talk about here. We could go over very old techniques such as loop unrolling, using the smallest acceptable types, reducing the number of operations you perform, pre-computing expensive operations (such as trigonometric functions)…the list goes on and on. There are, however, two techniques I find worth mentioning:
    1. Leverage vector CPU optimizations as much as you can. Since the ’80s, CPUs have been built with optimizations for operating on tuples. For example, if you need to multiply a vertex by a scalar, instead of doing (x * s, y * s, z * s, w * s), you can do (x, y, z, w) * s. That kind of operation happens on the CPU as 4 parallel multiplications. You can do this by using the simd library (see the example after this list).
    2. Use interleaved vertex data: You can specify vertex data as a series of arrays or as a single array where each element includes multiple attributes. The preferred format on iOS is the latter (an array of structs) with a single interleaved vertex format, because interleaved data provides better memory locality for each vertex (the struct sketch after this list shows one possible layout).
    3. Bonus: Always profile your application 🙂 This is “Optimizing Applications 101”.
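
To make points 2 and 5.2 more concrete, here is a minimal sketch of what an interleaved, aligned vertex structure and the matching vertex descriptor might look like. PFVertex is a made-up name for illustration; only MTLVertexDescriptor and the simd types come from Apple’s frameworks.

#import <Metal/Metal.h>
#import <simd/simd.h>

// A hypothetical interleaved vertex: every attribute starts at an offset
// that is a multiple of its component size.
typedef struct {
    vector_float4 position;   // offset 0
    vector_float4 color;      // offset 16
    vector_float2 texCoord;   // offset 32
} PFVertex;                   // stride: 48 bytes (padded to a 16-byte boundary)

// Describe that single interleaved buffer to Metal.
MTLVertexDescriptor *descriptor = [MTLVertexDescriptor vertexDescriptor];
descriptor.attributes[0].format = MTLVertexFormatFloat4;
descriptor.attributes[0].offset = offsetof(PFVertex, position);
descriptor.attributes[0].bufferIndex = 0;
descriptor.attributes[1].format = MTLVertexFormatFloat4;
descriptor.attributes[1].offset = offsetof(PFVertex, color);
descriptor.attributes[1].bufferIndex = 0;
descriptor.attributes[2].format = MTLVertexFormatFloat2;
descriptor.attributes[2].offset = offsetof(PFVertex, texCoord);
descriptor.attributes[2].bufferIndex = 0;
descriptor.layouts[0].stride = sizeof(PFVertex);
descriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;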
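
For point 3, the sketch below shows the general shape of a shared CPU/GPU buffer combined with a small pool of in-flight buffers guarded by a semaphore – the usual double/triple buffering pattern. kInFlightBuffers, kMaxVertices, the _vertexBuffers and _vertices ivars and the drawFrame method are my own placeholder names, and the command queue and render pass are assumed to exist elsewhere.

static const NSUInteger kInFlightBuffers = 2;   // double buffering

// Created once, e.g. during setup: shared buffers the CPU can write directly.
_inFlightSemaphore = dispatch_semaphore_create(kInFlightBuffers);
for (NSUInteger i = 0; i < kInFlightBuffers; i++) {
    _vertexBuffers[i] = [device newBufferWithLength:sizeof(PFVertex) * kMaxVertices
                                            options:MTLResourceStorageModeShared];
}

// Every frame: wait until one of the buffers is no longer being read by the
// GPU, write the new vertex data into it, and signal once the GPU is done.
- (void)drawFrame {
    dispatch_semaphore_wait(_inFlightSemaphore, DISPATCH_TIME_FOREVER);
    _bufferIndex = (_bufferIndex + 1) % kInFlightBuffers;

    id<MTLBuffer> buffer = _vertexBuffers[_bufferIndex];
    memcpy([buffer contents], _vertices, sizeof(PFVertex) * _vertexCount);

    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    dispatch_semaphore_t semaphore = _inFlightSemaphore;
    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
        dispatch_semaphore_signal(semaphore);  // the GPU finished with this buffer
    }];

    // ... encode the render pass using `buffer` as the vertex buffer, then:
    [commandBuffer commit];
}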
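
Point 4 ultimately comes down to the primitive type and index buffer you hand to the render encoder. Assuming vertexBuffer, indexBuffer and indexCount already exist, the draw call looks roughly like this:

[renderEncoder setVertexBuffer:vertexBuffer offset:0 atIndex:0];
[renderEncoder drawIndexedPrimitives:MTLPrimitiveTypeTriangleStrip
                          indexCount:indexCount
                           indexType:MTLIndexTypeUInt16
                         indexBuffer:indexBuffer
                   indexBufferOffset:0];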
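
And for point 5.1, multiplying a whole vertex by a scalar with the simd library looks like this – the compiler turns the vector multiply into parallel SIMD instructions instead of four separate scalar multiplications:

#include <simd/simd.h>

vector_float4 vertex = { 1.0f, 2.0f, 3.0f, 1.0f };  // a vertex in homogeneous coordinates
float scale = 2.0f;

// One vector operation instead of (x * s, y * s, z * s, w * s).
vector_float4 scaled = vertex * scale;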

I hope this post was useful for you. If you have any contributions, please comment below.

Thank you!

Tracing routes with MapKit

05/22/2012 § 23 Comments


Presenting a map to the user is a common feature of mobile apps. And very often this feature comes with an additional requirement: tracing the route from the current user location to some arbitrary destination. The thing is, most apps accomplish this last requirement by adding a button to the right navigation item that opens up Google Maps in the browser. But that is usually not the best user experience.

Most developers don’t know this (and I was one of them not too long ago), but it is possible to use MKMapView to easily render paths between two locations. There isn’t, however (for now), any native API that magically handles this kind of drawing.

iOS handles routes using MKOverlay objects (just like it handles pins using MKAnnotation). There is a native MKOverlay class called MKPolyline, which consists of an array of CLLocationCoordinate2D structures that MKMapView knows how to draw.

The thing is: we know only two locations (coordinates) – the current one (our origin) and the place’s location (the destination). And we need all the coordinates in between these two endpoints, describing a smooth path that follows the roads and streets (considering traffic and so on), in order to properly create the MKPolyline object and add it to the map.

This is where the Google Directions API comes in. Google offers an API (both JSON and XML) that, among other options, lets you specify two locations and returns a complex set of information containing all sorts of data, like routes (with alternatives), waypoints, distance and directions (instructions). At first, you might look at the documentation and think you need to write a parser, iterate through the structure and grab what you need. That is exactly what you need to do, but it is not as difficult as it seems. The information we are looking for is available as a string named overview_polyline under the route tag. Just grab that.

If you are using JSON (the recommended output), there are a lot of third-party libraries out there that represent a JSON string as native data structures such as NSArray, NSDictionary and NSString. Now, if you are really lazy (and smart), you use a library like AFNetworking to handle requests and get JSON parsing for free right in the response callback.

Almost every step of the process is a piece of cake so far: MapKit has a native overlay view that knows how to display a route, the route is given to you almost effortlessly by Google, and AFNetworking automatically parses the response Google sent you.

The only remaining detail is: the Google Directions API gives us a string representing the route, and we need an array of CLLocationCoordinate2D structures.

Fortunately, the Encoded Polyline Algorithm Format used by Google is fully described in the docs, and an Objective-C implementation was made available by Ankit Srivastava on Stack Overflow.

For those lazy guys who are in a hurry, good news: there is a code snippet below for every point of our discussion.

(WordPress sucks when it comes to presenting source code, but there is a “View Source” button that lets you copy the code and paste it properly! And just in case you wish to read the code, I have also attached a file here 😉)

  • Create the Map View
_mapView = [[MKMapView alloc] initWithFrame:self.view.bounds];
_mapView.showsUserLocation = YES;
_mapView.delegate = self;
[self.view addSubview:_mapView];
  • Once you have the current location, define the map region you want to be visible:
MKCoordinateRegion viewRegion = MKCoordinateRegionMakeWithDistance(self.location.coordinate, REGION_SIZE, REGION_SIZE);
MKCoordinateRegion adjustedRegion = [_mapView regionThatFits:viewRegion];
[_mapView setRegion:adjustedRegion animated:NO];
  • Also request Google Directions API to retrieve the route:

AFHTTPClient *_httpClient = [AFHTTPClient clientWithBaseURL:[NSURL URLWithString:@"http://maps.googleapis.com/"]];
[_httpClient registerHTTPOperationClass:[AFJSONRequestOperation class]];

NSMutableDictionary *parameters = [[NSMutableDictionary alloc] init];
[parameters setObject:[NSString stringWithFormat:@"%f,%f", location.coordinate.latitude, location.coordinate.longitude] forKey:@"origin"];
[parameters setObject:[NSString stringWithFormat:@"%f,%f", endLocation.coordinate.latitude, endLocation.coordinate.longitude] forKey:@"destination"];
[parameters setObject:@"true" forKey:@"sensor"];

NSMutableURLRequest *request = [_httpClient requestWithMethod:@"GET" path:@"maps/api/directions/json" parameters:parameters];
request.cachePolicy = NSURLRequestReloadIgnoringLocalCacheData;

AFHTTPRequestOperation *operation = [_httpClient HTTPRequestOperationWithRequest:request success:^(AFHTTPRequestOperation *operation, id response) {
	NSInteger statusCode = operation.response.statusCode;
	if (statusCode == 200) {
	 [self parseResponse:response];

	} else {
		// Handle non-200 status codes here.
	}
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
	// Handle request failures here.
}];

[_httpClient enqueueHTTPRequestOperation:operation];

  • Get what you need:
- (void)parseResponse:(NSDictionary *)response {
	NSArray *routes = [response objectForKey:@"routes"];
	NSDictionary *route = [routes lastObject];
	if (route) {
		NSString *overviewPolyline = [[route objectForKey:@"overview_polyline"] objectForKey:@"points"];
		_path = [self decodePolyLine:overviewPolyline];
	}
}
  • And use the code provided by Ankit Srivastava:
- (NSMutableArray *)decodePolyLine:(NSString *)encodedStr {
	NSMutableString *encoded = [[NSMutableString alloc] initWithCapacity:[encodedStr length]];
	[encoded appendString:encodedStr];
	[encoded replaceOccurrencesOfString:@"\\\\" withString:@"\\"
	                            options:NSLiteralSearch
	                              range:NSMakeRange(0, [encoded length])];
	NSInteger len = [encoded length];
	NSInteger index = 0;
	NSMutableArray *array = [[NSMutableArray alloc] init];
	NSInteger lat = 0;
	NSInteger lng = 0;
	while (index < len) {
		// Each delta is encoded as a series of 5-bit chunks, each offset by 63.
		NSInteger b;
		NSInteger shift = 0;
		NSInteger result = 0;
		do {
			b = [encoded characterAtIndex:index++] - 63;
			result |= (b & 0x1f) << shift;
			shift += 5;
		} while (b >= 0x20);
		NSInteger dlat = ((result & 1) ? ~(result >> 1) : (result >> 1));
		lat += dlat;
		shift = 0;
		result = 0;
		do {
			b = [encoded characterAtIndex:index++] - 63;
			result |= (b & 0x1f) << shift;
			shift += 5;
		} while (b >= 0x20);
		NSInteger dlng = ((result & 1) ? ~(result >> 1) : (result >> 1));
		lng += dlng;
		// The decoded values are in units of 1e-5 degrees.
		NSNumber *latitude = [[NSNumber alloc] initWithFloat:lat * 1e-5];
		NSNumber *longitude = [[NSNumber alloc] initWithFloat:lng * 1e-5];

		CLLocation *location = [[CLLocation alloc] initWithLatitude:[latitude floatValue] longitude:[longitude floatValue]];
		[array addObject:location];
	}

	return array;
}
  • Create the MKPolyline annotation:
NSInteger numberOfSteps = _path.count;

CLLocationCoordinate2D coordinates[numberOfSteps];
for (NSInteger index = 0; index < numberOfSteps; index++) {
	CLLocation *location = [_path objectAtIndex:index];
	CLLocationCoordinate2D coordinate = location.coordinate;

	coordinates[index] = coordinate;
}

MKPolyline *polyLine = [MKPolyline polylineWithCoordinates:coordinates count:numberOfSteps];
[_mapView addOverlay:polyLine];
  • And make it visible on the map view:
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id <MKOverlay>)overlay {
	MKPolylineView *polylineView = [[MKPolylineView alloc] initWithPolyline:overlay];
	polylineView.strokeColor = [UIColor redColor];
	polylineView.lineWidth = 1.0;

	return polylineView;
}

Please note the code snippets provided in this post don’t have any error handling, nor are they optimized. Remember to address these issues before copying them into your application.

3D Tag Cloud available on GitHub!

11/26/2011 § Leave a comment


Hey guys!

This time I bring very good news! The 3D Tag Cloud I created almost a year ago is finally free software, available on GitHub.

Yes, that is right. A lot of people have been asking for some code sample after reading that tutorial, so I decided to just make it available on GitHub as free software under the terms of the GNU General Public License version 3, so that you guys can use, redistribute or modify it at will.

Now it is your turn! Contribute!

Creating a 3D Tag Cloud

11/17/2010 § 33 Comments


I was bored last weekend, so I decided to do something slightly different. The first idea that came to my mind was a 3D sphere on which I could place any kind of view. Truth is, there is nothing better than a clear idea of what you want to do in order to achieve your goals. So, why not a Tag Cloud? It is simple, and at a very basic level it is nothing more than a bunch of views distributed on a sphere.

Since I wanted to create it without using OpenGL ES, I took a moment to think about the best way to implement this using just UIKit.

After not that much thinking, I decided to evenly distribute points on a sphere and use these points as the center of each view. Obviously, UIKit works with only 2 dimensions, so how do we achieve the 3D aspect? Simple: let’s use the z coordinate as the scale factor. But then there would still be smaller views (views that should be far from the screen) overlaying bigger views (the ones closer to the screen). This can be easily solved via z-index ordering (which is available in the standard SDK).

There would be no fun if we were not able to at least rotate our 3D Tag Cloud around every axis, right? ^^

Once our goal is clear, the only remaining question is how to implement all this stuff.

Let’s revisit our requirements:

  1. Evenly distribute points in order to place our views on a sphere;
  2. Use the z coordinate to properly scale and order each view;
  3. Rotate each view around each axis so that we simulate a full sphere rotation;

There are a lot of algorithms out there to evenly distribute points on a sphere, but there would be no fun if we didn’t really define and comprehend what is going on here. To begin with, what exactly are evenly distributed points?

Being very precise, we could say that to evenly distribute points on a sphere, the resulting polygonal object defined by the points needs to have identical faces as well as an equal number of faces meeting at every vertex – and this is what defines perfect shapes (or Platonic solids). The problem is that there is no perfect shape with more than 20 vertices, and since each vertex is the center of a view, this would mean we could only have 20 views (UILabels) in our cloud.

Therefore we need to think about “evenly” in another way. Let’s say the points are evenly distributed if the two closest points in the whole set are as far apart from each other as possible.

Again, there is a bunch of algorithms for this (the Golden Section Spiral, Saff and Kuijlaars, Dave Rusin’s Disco Ball and other variations). I tried a lot of them, and the one that best fit my needs was the Golden Section Spiral, not just because I had better distribution results but also because I could easily define the number of vertices I wanted.

What the Golden Section Spiral algorithm does is choose successive longitudes according to the “most irrational number” (the golden ratio) so that no two nodes in nearby bands come too near each other in longitude.

The implementation I came up with runs this algorithm creating 3D points (which I called PFPoint – exactly the same as a CGPoint but with an additional coordinate). These points are then added to an array so that we can use them later to properly place our views.



@implementation PFGoldenSectionSpiral

+ (NSArray *)sphere:(NSInteger)n {
	NSMutableArray *result = [NSMutableArray arrayWithCapacity:n];

	CGFloat N = n;
	CGFloat h = M_PI * (3 - sqrt(5));
	CGFloat s = 2 / N;
	for (NSInteger k = 0; k < N; k++) {
		CGFloat y = k * s - 1 + (s / 2);
		CGFloat r = sqrt(1 - y * y);
		CGFloat phi = k * h;
		PFPoint point = PFPointMake(cos(phi) * r, y, sin(phi) * r);
		NSValue *v = [NSValue value:&point withObjCType:@encode(PFPoint)];
		[result addObject:v];
	}
	return result;
}

@end

This algorithm returns a list of points within [-1, 1], meaning we will need to properly convert each coordinate to iOS coordinates. In our case, the z coordinate needs to be converted to [0, 1], while the x and y coordinates go to [0, frame size].

So basically you can create a UIView subclass – which I called PFSphereView – and add a method – let’s say setItems: – that receives a set of views to place on the sphere.


- (void)setItems:(NSArray *)items {
	NSArray *spherePoints =
                [PFGoldenSectionSpiral sphere:items.count];
	for (int i=0; i<items.count; i++) {
		PFPoint point;
		NSValue *pointRep = [spherePoints objectAtIndex:i];
		[pointRep getValue:&point];

		UIView *view = [items objectAtIndex:i];
		view.tag = i;
		[self layoutView:view withPoint:point];
		[self addSubview:view];
	}
}

- (void)layoutView:(UIView *)view withPoint:(PFPoint)point {
	CGFloat viewSize = view.frame.size.width;

	CGFloat width = self.frame.size.width - viewSize*2;
	CGFloat x = [self coordinateForNormalizedValue:point.x
             withinRangeOffset:width];
	CGFloat y = [self coordinateForNormalizedValue:point.y
             withinRangeOffset:width];
	view.center = CGPointMake(x + viewSize, y + viewSize);

	CGFloat z = [self coordinateForNormalizedValue:point.z
              withinRangeOffset:1];

	view.transform = CGAffineTransformScale(
               CGAffineTransformIdentity, z, z);
	view.layer.zPosition = z;
}

- (CGFloat)coordinateForNormalizedValue:(CGFloat)normalizedValue
             withinRangeOffset:(CGFloat)rangeOffset {
	CGFloat half = rangeOffset / 2.f;
	CGFloat coordinate = fabs(normalizedValue) * half;
	if (normalizedValue > 0) {
		coordinate += half;
	} else {
		coordinate = half - coordinate;
	}
	return coordinate;
}

Once the setItems method is called, we generate a point for each view and lay that view out by placing and scaling it according to the converted iOS coordinates. Now you can create a view controller and instantiate the PFSphereView, passing a bunch of UILabels, to see what our 3D Tag Cloud looks like.

Unfortunately you can’t animate any kind of rotation yet, since we haven’t addressed it. And now is when the cool part comes into play.

We actually can’t use any 3D transformation available in the SDK, since we don’t want to rotate our labels but the whole sphere. How can we achieve this?

Well, to rotate our sphere all we need to do is rotate each point. Once a point is rotated, its coordinates change on the Cartesian plane, and therefore each view will rotate around the desired axis in such a way that only its position and scale change (not its actual rotation angle).

The achieved behavior is totally different from the result we would get by changing the anchorPoint to be the center of the sphere and use CATransform3DRotate, for example.

If you didn’t get exactly why, take a moment to draw some points on a 3D Cartesian plane on a piece of paper. Imagine each point rotating around the center of the sphere so that we can actually see a sphere rotating. Then imagine every view – UILabel in our case – rotating around that very same point. Once you find out the difference, continue reading this post.

….

Time to rotate our sphere.

The basic idea behind transformations such as rotations in 3D space is to think of our views as a bunch of points or vectors on which we apply some math and project the results back into 3-dimensional space. A very efficient way to do so is to use a homogeneous coordinate representation, which maps a point in an n-dimensional space to another in (n+1)-dimensional space, so that we can represent any point or geometric transformation using only matrices.

To apply a transformation to a point, we need to multiply two matrices: the point (as a row vector) and the transformation matrix.

Computer graphics taught us that every transformation can be achieved by multiplying a set of matrices. This not only allows us to apply the very basic form of rotation, but also to concatenate transformations in order to achieve a “real world” rotation. For example, if we want to rotate an object around its center, we need to translate that object to the origin, rotate it and then translate it back to its original position.

And that is what we are going to do with each of our points.

If you don’t have a basic understanding of geometric transformations, you should read this topic about rotation to get familiar with the terms and matrices used here.

Now that we know the sequence of steps we need to take in order to achieve rotation around a given point on a given axis, we just need to get down to the math. Let’s do that using code.

So, there are three kinds of primitive geometric transformations that can be combined to achieve any geometric transformation: rotation, translation and scaling.

We saw that we can combine translations and rotations (and that order matters…just look at the figure and try to achieve D without following the order A-B-C-D) into a single matrix, and multiply our point by this matrix to retrieve a new point that is the previous point rotated around an arbitrary point. But how do we define which axis we are rotating about, or how do we tell what is a translation and what is a rotation?

Actually, there is a primitive matrix defined for each primitive geometric transformation. Below I provide the matrices useful for our goal: translation and rotation around each axis (x, y and z).


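// These matrices assume the row-vector convention used throughout this post:
// a point is a 1x4 row vector multiplied on the left of the matrix,
// which is why the translation components sit in the bottom row.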
static PFMatrix PFMatrixTransform3DMakeTranslation(PFPoint point) {
	CGFloat T[4][4] = {
		{1, 0, 0, 0},
		{0, 1, 0, 0},
		{0, 0, 1, 0},
		{point.x, point.y, point.z, 1}
	};

	PFMatrix matrix = PFMatrixMakeFromArray(4, 4, *T);

	return matrix;
}

static PFMatrix PFMatrixTransform3DMakeXRotation(PFRadian angle) {
	CGFloat c = cos(PFRadianMake(angle));
	CGFloat s = sin(PFRadianMake(angle));

	CGFloat T[4][4] = {
		{1, 0, 0, 0},
		{0, c, s, 0},
		{0, -s, c, 0},
		{0, 0, 0, 1}
	};

	PFMatrix matrix = PFMatrixMakeFromArray(4, 4, *T);

	return matrix;
}

static PFMatrix PFMatrixTransform3DMakeYRotation(PFRadian angle) {
	CGFloat c = cos(PFRadianMake(angle));
	CGFloat s = sin(PFRadianMake(angle));

	CGFloat T[4][4] = {
		{c, 0, -s, 0},
		{0, 1, 0, 0},
		{s, 0, c, 0},
		{0, 0, 0, 1}
	};

	PFMatrix matrix = PFMatrixMakeFromArray(4, 4, *T);

	return matrix;
}

static PFMatrix PFMatrixTransform3DMakeZRotation(PFRadian angle) {
	CGFloat c = cos(PFRadianMake(angle));
	CGFloat s = sin(PFRadianMake(angle));

	CGFloat T[4][4] = {
		{c, s, 0, 0},
		{-s, c, 0, 0},
		{0, 0, 1, 0},
		{0, 0, 0, 1}
	};

	PFMatrix matrix = PFMatrixMakeFromArray(4, 4, *T);

	return matrix;
}

It would be nice of you to properly research the math behind these matrices (this post would be too long if I included a proper explanation of “where the hell did this matrix set come from??”, and you probably would not read it until the very end 😉 [BTW, I am surprised you got here]).

As you may have noticed, I created a bunch of representations and helpers to keep the code really simple and straightforward. I don’t think you need me to provide all of that code (the matrix representation, for example), so I will keep to what really matters. Anyway, you can always reach me via e-mail if you want some of that code.
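
That said, if it helps to follow the snippets above and below, here is one possible (simplified) shape for those helpers – a PFMatrix that stores its dimensions and a small row-major grid, plus a naive multiplication. Treat it as a sketch; the actual types in my project differ slightly, and PFPoint/PFPointMake are the ones mentioned earlier.

typedef struct {
	NSInteger rows;
	NSInteger columns;
	CGFloat m[4][4];   // big enough for the 1x4 and 4x4 matrices used here
} PFMatrix;

static PFMatrix PFMatrixMakeFromArray(NSInteger rows, NSInteger columns, CGFloat *values) {
	PFMatrix matrix;
	matrix.rows = rows;
	matrix.columns = columns;
	for (NSInteger i = 0; i < rows; i++) {
		for (NSInteger j = 0; j < columns; j++) {
			matrix.m[i][j] = values[i * columns + j];
		}
	}
	return matrix;
}

// Naive (rows x columns) matrix multiplication – more than enough for 4x4.
static PFMatrix PFMatrixMultiply(PFMatrix a, PFMatrix b) {
	PFMatrix result;
	result.rows = a.rows;
	result.columns = b.columns;
	for (NSInteger i = 0; i < a.rows; i++) {
		for (NSInteger j = 0; j < b.columns; j++) {
			CGFloat sum = 0;
			for (NSInteger k = 0; k < a.columns; k++) {
				sum += a.m[i][k] * b.m[k][j];
			}
			result.m[i][j] = sum;
		}
	}
	return result;
}

// A point becomes a 1x4 row vector in homogeneous coordinates, and back.
static PFMatrix PFMatrixMakeFromPFPoint(PFPoint point) {
	CGFloat P[1][4] = {{ point.x, point.y, point.z, 1 }};
	return PFMatrixMakeFromArray(1, 4, *P);
}

static PFPoint PFPointMakeFromMatrix(PFMatrix matrix) {
	return PFPointMake(matrix.m[0][0], matrix.m[0][1], matrix.m[0][2]);
}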

Ok…we already have a specific matrix to rotate a point around any axis and also one to translate it. How are we supposed to use these matrices to rotate a point around another arbitrary point?

Since the “the best way to teach is by example”….

Let’s say that you want to rotate (1,1,1) around (0,1,1) by 45 degrees on the x axis. In this case the code looks like:


static PFMatrix PFMatrixTransform3DMakeXRotationOnPoint(PFPoint point,
      PFRadian angle) {
	PFMatrix T = PFMatrixTransform3DMakeTranslation(
             PFPointMake(-point.x, -point.y, -point.z));
	PFMatrix R = PFMatrixTransform3DMakeXRotation(angle);
	PFMatrix T1 = PFMatrixTransform3DMakeTranslation(point);

	return PFMatrixMultiply(PFMatrixMultiply(T, R), T1);
}

PFMatrix coordinate = PFMatrixMakeFromPFPoint(PFPointMake(1,1,1));
PFMatrix transform = PFMatrixTransform3DMakeXRotationOnPoint(
         PFPointMake(0,1,1), 45);
PFMatrix transformedCoordinate = PFMatrixMultiply(coordinate,
         transform);

PFPoint result = PFPointMakeFromMatrix(transformedCoordinate);

Ok….What the heck is going on!?

First of all, we have to create a matrix from our point (UILabel.center) in order to be able to multiply the point by our geometric transformation. This transformation, in turn, consists of 3 primitive transformations.

The first one translates the point to the origin, which is why we build the translation matrix using (-x, -y, -z). The second one builds a rotation of 45 degrees around the x axis.

As I explained before, these two transformations are concatenated through multiplication.

And since we want to rotate it around “point” and not around the origin, we translate it back to “point” by multiplying the resulting matrix by a second translation matrix using (x, y, z).

That composite transformation is then multiplied by our point’s (UILabel.center) matrix representation. This gives us a third matrix as a result, from which we extract a point representation.

At this moment you should be thinking that the “result” point will be the next center and scale representation for one of our labels. If you thought that, you are right!

The basic idea now is to listen for gestures (using UIPanGestureRecognizer or UIRotationGestureRecognizer) to select which rotation matrix will be used and what angle you will pass to PFMatrixTransform3DMake(Axis)RotationOnPoint. Once you have selected the matrix and figured out the proper angle (based on the locationInView: method of your gesture recognizer, or even using a constant), you just need to iterate through every UILabel and call layoutView:withPoint: just like we did in the setItems method.

So as not to take all the fun away from you, I will let you handle the gestures 😉
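
That said, if you want a starting point, a pan handler could look roughly like the sketch below. handlePan:, the _points array (the current unit-sphere point of each view, indexed by tag) and the 0.25 degrees-per-point factor are my own assumptions, not code from the project, and PFMatrixTransform3DMakeYRotationOnPoint is simply the y-axis sibling of the x-axis helper shown above.

- (void)handlePan:(UIPanGestureRecognizer *)gesture {
	CGPoint translation = [gesture translationInView:self];

	// Horizontal movement rotates around the y axis, vertical around the x axis.
	PFRadian angleY = translation.x * 0.25;
	PFRadian angleX = -translation.y * 0.25;

	// The points live in [-1, 1], so the sphere center is the origin.
	PFPoint center = PFPointMake(0, 0, 0);
	PFMatrix rotation = PFMatrixMultiply(
		PFMatrixTransform3DMakeXRotationOnPoint(center, angleX),
		PFMatrixTransform3DMakeYRotationOnPoint(center, angleY));

	for (UIView *view in self.subviews) {
		PFPoint point = _points[view.tag];
		PFMatrix transformed = PFMatrixMultiply(PFMatrixMakeFromPFPoint(point), rotation);
		PFPoint rotated = PFPointMakeFromMatrix(transformed);

		_points[view.tag] = rotated;              // remember the new position for the next event
		[self layoutView:view withPoint:rotated]; // reuse the layout code from setItems
	}

	[gesture setTranslation:CGPointZero inView:self];
}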

By the way, below is how your sphere should look using this code (of course I added some code to actually handle gestures!).

Try using [0.1, 1] instead of [0, 1] for the z coordinate interval, and play with the view sizes to make it look the way you would like.

Hope you enjoy and share your thoughts!

Three20: A Brief TTLauncherView tutorial

10/19/2010 § 35 Comments


The TTLauncherView is a very simple UI component to use; it basically mimics the iOS home screen. It comes with a scroll view and a page control that let you browse through a set of TTLauncherItems. Each of these items can be reordered, deleted, or even have a badge number, just like application badges on the iOS home screen.

I have already worked on a lot of applications for which a “home screen” behavior would have been great. The Facebook guys found a very neat way to provide this behavior to custom application developers: they just made it a UI component with some delegate methods, like every other standard component, and of course it works pretty well with their navigation model.

It is so simple that you just have to create your view controller, add a TTLauncherView to it and attach your TTLauncherItems. If you want to provide a more complete behavior, just implement the TTLauncherViewDelegate and there you go.


Provisioning unveiled

09/08/2010 § 1 Comment


I remember the first time I had to provision a device to run my application (about a year ago). It was complicated to grasp the idea behind the whole provisioning process.

This is precisely why I decided to write about it. Maybe you are lucky enough to read this before your journey.


Saving work with Three20! And write almost no code.

09/06/2010 § 42 Comments


I know, I know…RSS tutorials are all over the internet, but this one is not about that at all, really.

Yesterday I posted 5 reasons to use Three20 and perhaps one not to. And then I thought: if the problem is documentation, and documentation is really important, especially in the beginning, why not write a hands-on tutorial for the very basic stuff?

But every app needs an idea, therefore I had to come up with one too…and the idea is, of course, an RSS Feed Reader!

