Posted by: eygneph | 2012/09/23

A Conversation

"They" came at last.

After taking the last colony world, "they" obtained Earth's coordinates and came for Earth. Our military was powerless to fight back; within a few hours, most of humanity had been taken aboard "them" or swallowed by the flames of war.

It was all too late. I had failed to save my species. Like every other species in the galaxy, we had been conquered, in a way no one had expected.

"Accept your fate." A harsh, synthesized electronic voice.
"Why destroy us? There must be a reason for what you're doing." I asked calmly. With my body no longer able to supply adrenaline, even speaking left me exhausted. Religion? Economic gain? Ideology? Our government never did figure out why "they" attacked us without a declaration of war.
"You would not understand. This is fate."
"Ha, fate. My species and I believe that fate is forged by our own hands."
"Your creations created your fate."
"You mean synthetic life? These drones and robots?"
"Created life always turns on its creator. That has been the law for hundreds of millions of years."
"So you have witnessed countless conflicts between machines and intelligent life across the long river of history? How ironic. You are machines yourselves."
"We preserve the creations of every generation of the galaxy's intelligent beings, in the form of synthetic life. Humanity is no exception."

Hmph. I would rather keep existing in my own "form".

There would be no peace.

"Nia, why did you betray me?" I turned to the person beside me, the one who had once fought at my side.
"I never betrayed you, Adam. As it said, this is the fate of all intelligent life: you and I, Earth and the colony worlds, humanity and every other intelligent species." Nia answered even more calmly than I did, like an AI. Or, to be precise, like an AI answering me the way a human would.
"But you're human! Why did you turn your gun on your own people? Did 'they' take control of your mind?"
"I used to believe I was human, Adam. When synthetic life was created, I was disguised in human form and sent to a human world to gather information. I am not the only one. Once the conditions for conflict between synthetic life and intelligent life appear, we send out the signal that summons 'them' here."
"So you are no longer you."
"We and 'they' are one, Adam. A kind of fail-safe."
"Fine, fine... But I don't understand. If synthetic life causes this much chaos, you are perfectly capable of destroying it, the way I can pull the plug on an AI."
"That would be useless, Adam. Soon enough your children would create even more powerful synthetic life. Your conflict is inevitable."
"So what? Even if you're right, it would be a tragedy of our own making. Why interfere with the path we have chosen?"
"That is not creation, Adam; that is chaos. The finest intelligent species end up destroyed by their synthetic creations, and everything starts over from the beginning. For hundreds of millions of years this law has repeated itself. We exist so that the energy of the universe keeps developing in an ordered way, instead of treading in place in chaos. Without us, your conflicts would burn the universe down into heat death. In other words, the universe would die because of you."
"⋯⋯"
"It is all over, Adam."

⋯⋯⋯⋯⋯⋯⋯⋯

Inspired by the Mass Effect series, the Halo series, and Tengen Toppa Gurren Lagann.

Posted by: eygneph | 2010/10/09

Some ideas about Verlet based physics

First of all, integration is not just a word from your college mathematics courses. It is used every day in game development; inside your Update() loop, you see code like this everywhere:


velocity += deltaTime * acceleration;
position += deltaTime * velocity;

This is what we call Euler integration.

Verlet integration is not much more difficult than the (more commonly used) Euler integration, but it has two advantages:

  1. Better numerical stability,
  2. Easier handling of position constraints.

If you're not sure what numerical stability means, here's a simple graph illustrating the instability caused by Euler integration:

In the above graph, you can see that the step size is critical to the Euler method: a bigger step causes the integration to oscillate, and an even bigger step makes the whole simulation "explode" away from the exact solution.

For more information on the stability issue, you can refer to http://en.wikipedia.org/wiki/Euler_method and http://en.wikipedia.org/wiki/Stiff_equation. Note that numerical stability is a topic that could fill an entire book, and I'd like to avoid digging into it here.
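To see the instability concretely, here's a small experiment of my own (not from the original post): explicit Euler applied to the undamped oscillator x'' = -x gains a little energy on every step, so a large step makes the simulation explode.

#include <cmath>
#include <cstdio>

// Explicit Euler on x'' = -x (exact solution: cos t, amplitude 1 forever).
// The amplification factor per step is sqrt(1 + dt*dt) > 1, so the orbit
// always spirals outward; a large dt makes it explode quickly.
void simulate(double dt, double tEnd) {
    double x = 1.0, v = 0.0;
    for (double t = 0.0; t < tEnd; t += dt) {
        double xNew = x + dt * v; // uses the *old* velocity: explicit Euler
        v += dt * (-x);           // acceleration a(x) = -x
        x = xNew;
    }
    std::printf("dt=%.2f -> x(%.0f) = %g (exact: %g)\n", dt, tEnd, x, std::cos(tEnd));
}

int main() {
    simulate(0.01, 50.0); // mild energy drift
    simulate(1.00, 50.0); // explodes: amplitude grows by ~sqrt(2) per step
}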

The basic form of Verlet integration is as follows:

x’ = 2x - x* + a*dt*dt;
x* = x;

Here x’ is the position of the simulated entity at time t + dt, x* is the position at time t – dt, and x is the position at time t. As you can see, Verlet integration is still very simple, but it introduces the previous position x* to make the scheme work. At the same time, it removes the explicit dependency on velocity, and this makes life a lot easier.

Without velocity, you no longer integrate velocity explicitly, which removes the instability of velocity integration. Also, for soft body simulation such as cloth, position constraints are very handy in Verlet integration (in fact, nearly all constraints in Verlet based methods are position constraints). A solid position constraint keeps the cloth from behaving too elastically or unstably, which is quite hard to get right without position constraints. Since position constraints are at the heart of Verlet based physics, it is also called the particle based or position based method.
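To make that concrete, here is a minimal sketch of my own (not code from the original post) of a Verlet step plus one distance-constraint relaxation pass, in the spirit of Jakobsen's "Advanced Character Physics". Running relax() several times per frame makes the constraints appear stiffer.

#include <cmath>
#include <vector>

struct Particle {
    float x, y;   // position at time t
    float px, py; // previous position (the x* in the formula above)
};

struct DistanceConstraint {
    int a, b;     // indices of the two constrained particles
    float rest;   // rest length the constraint tries to maintain
};

// Verlet step: x' = 2x - x* + a*dt*dt, then remember the old position.
void integrate(std::vector<Particle>& ps, float ax, float ay, float dt) {
    for (auto& p : ps) {
        float nx = 2.0f*p.x - p.px + ax*dt*dt;
        float ny = 2.0f*p.y - p.py + ay*dt*dt;
        p.px = p.x; p.py = p.y;
        p.x = nx;   p.y = ny;
    }
}

// One relaxation pass: move each particle pair back toward the rest length.
void relax(std::vector<Particle>& ps, const std::vector<DistanceConstraint>& cs) {
    for (const auto& c : cs) {
        Particle &a = ps[c.a], &b = ps[c.b];
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx*dx + dy*dy);
        if (len < 1e-6f) continue;             // degenerate pair, skip
        float k = 0.5f * (len - c.rest) / len; // split the correction evenly
        a.x += dx*k; a.y += dy*k;
        b.x -= dx*k; b.y -= dy*k;
    }
}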

I'm not going to show you how to do cloth physics in this article (if you're interested, refer to the reference papers). Instead, I'll talk a little about how I plan to use Verlet based physics to achieve unified dynamic physics in a game.

Modern games have used physics engines extensively since Half-Life 2. However, most of them focus on rigid body physics; that is, the simulated entities are indestructible, undeformable rigid bodies. No matter how much force pushes a box, or how many grenades you throw at it, the box never breaks apart, it just bounces away, which is far from how it would behave in reality. People do a lot of work to make things destructible and build a more immersive environment. The Unreal Development Kit has a feature to pre-tessellate a mesh and then explode the pieces at runtime when they're hit by weapons. CryEngine also has pre-scripted physics, which is actually pre-recorded animation authored at design time. But none of those achieve a truly dynamic environment that is destructible at runtime. The only game with truly destructible environments is Star Wars: The Force Unleashed, which uses finite element technology from Digital Molecular Matter. Their technology is based on a series of papers; I'm listing them in the reference section.

My idea is similar to finite element methods, but might be simpler since we're focusing on 2D only. From my perspective, 2D has a lot of advantages over 3D in physics simulation, especially in terms of predictable behavior and the limited processing power of mobile platforms.

Basically, matter is made of particles (or atoms), regardless of whether it's rigid, soft, or fluid. If processing power permits, it's quite feasible to simulate a rigid or soft body using masses of particles. Since the Verlet based method gives us stability and a convenient way to impose position constraints, we should be able to create both rigid and soft bodies; their only difference is the position constraints associated with them. A rigid body has harder constraints, and a soft body has softer constraints. A typical box can be represented by a structure like the sketch below:
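As a rough stand-in for the original diagram (reusing the Particle and DistanceConstraint types from the sketch above; the exact layout is my assumption), a box can be four corner particles joined by four edge constraints plus two diagonal braces so it cannot shear:

struct Box {
    std::vector<Particle> particles;             // 4 corners
    std::vector<DistanceConstraint> constraints; // 4 edges + 2 diagonals
};

Box makeBox(float x, float y, float w, float h) {
    Box box;
    float corners[4][2] = {{x, y}, {x + w, y}, {x + w, y + h}, {x, y + h}};
    for (auto& c : corners)
        box.particles.push_back({c[0], c[1], c[0], c[1]}); // at rest: x* == x
    float diag = std::sqrt(w*w + h*h);
    box.constraints = {
        {0, 1, w}, {1, 2, h}, {2, 3, w}, {3, 0, h}, // rigid edges
        {0, 2, diag}, {1, 3, diag}                  // diagonals resist shearing
    };
    return box;
}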

With those constraints in mind, destructible objects can be implemented by breaking some key constraints between particles, as sketched below. So it should be easy to simulate rigid boxes, soft blobs, and even water or oil. All those objects are composed of the same particles and constraints, so they can be simulated within a unified solver.
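Breaking then amounts to deleting constraints. A sketch under the same assumed types (the threshold is arbitrary) that drops any constraint stretched past a breaking factor:

#include <algorithm>

// After relax(), remove constraints stretched well beyond their rest length;
// whatever they held together then falls apart and moves freely.
void breakOverstretched(const std::vector<Particle>& ps,
                        std::vector<DistanceConstraint>& cs,
                        float breakFactor /* e.g. 1.5f */) {
    cs.erase(std::remove_if(cs.begin(), cs.end(),
        [&](const DistanceConstraint& c) {
            float dx = ps[c.b].x - ps[c.a].x;
            float dy = ps[c.b].y - ps[c.a].y;
            return std::sqrt(dx*dx + dy*dy) > c.rest * breakFactor;
        }),
        cs.end());
}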

Nothing is perfect. So far the system still lacks the traditional constraints of rigid body engines, such as revolute or prismatic joints. I can't find an easy way to include those constraints in Verlet based (or particle based) systems. So if you need those neat rigid body features, the easiest way might be a hybrid system holding both a Verlet based and an impulse based rigid body system. Based on the projection method described in "Advanced Character Physics" (see reference), I think such a hybrid system is still doable.

Reference:

Advanced Character Physics, Thomas Jakobsen, http://www.gamasutra.com/resource_guide/20030121/jacobson_pfv.htm
Position Based Dynamics, M. Müller et al., http://www.matthiasmueller.info/publications/posBasedDyn.pdf
Real-Time Deformation and Fracture in a Game Environment, Eric G. Parker and James O'Brien, http://graphics.cs.berkeley.edu/papers/Parker-RTD-2009-08/index.html

I've been using cocos2d iphone in my iPhone/iPad projects for a while. It has proved to be an easy to use 2D engine that lets you start programming gameplay (almost!) immediately. Recently it added support for the iPad. However, Apple's approval of iPad games seems stricter than for iPhone games, due to new items in the iPad HIG, especially around orientation handling.

For that reason, I plan to make my new game more orientation friendly. In cocos2d iphone, orientation changes can be handled by registering for the UIDeviceOrientationDidChangeNotification event and calling the appropriate CCDirector methods:

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
        // Enable orientation detection
	[[UIDevice currentDevice] beginGeneratingDeviceOrientationNotifications];
	// Register orientation detection
	[[NSNotificationCenter defaultCenter]
	 addObserver:self selector:@selector(orientationDidChanged:) name:@"UIDeviceOrientationDidChangeNotification" object:nil];
         ......
}
-(void) orientationDidChanged:(NSNotification*)notification
{
	UIDeviceOrientation orientation = [[UIDevice currentDevice] orientation];
	[[CCDirector sharedDirector] setDeviceOrientation:(ccDeviceOrientation)orientation];
}

However, this orientation change happens abruptly, with no transition animation. After some reading, I decided to roll my own. The orientation change handler does nothing more than call the appropriate OS functions and transform the coordinates.
So here's what I'm doing: express the gl* calls in CCDirector's applyLandscape as CGAffineTransform operations, which can be interpolated over the elapsed time, then convert the CGAffineTransform matrix to a GL matrix:


CGAffineTransform CGAffineTransformInterpolate(const CGAffineTransform *t0, const CGAffineTransform *t1, float factor)
{
	// clamp factor to [0, 1]
	if ( factor > 1 )
		factor = 1;
	if ( factor < 0 )
		factor = 0;

	return CGAffineTransformMake(t0->a*(1-factor) + t1->a*factor,
								 t0->b*(1-factor) + t1->b*factor,
								 t0->c*(1-factor) + t1->c*factor,
								 t0->d*(1-factor) + t1->d*factor,
								 t0->tx*(1-factor) + t1->tx*factor,
								 t0->ty*(1-factor) + t1->ty*factor);
}

// in your CCDirector.m:

- (void) setDeviceOrientation:(ccDeviceOrientation) orientation
{
	if( deviceOrientation_ != orientation ) {
		deviceOrientation_ = orientation;
		targetTransform_ = CGAffineTransformIdentity;
		elapsedSinceLastOrientationChange_ = 0;

		CGSize s = [openGLView_ frame].size;
		float w = s.width / 2;
		float h = s.height / 2;

		switch( deviceOrientation_) {
			case CCDeviceOrientationPortrait:
				[[UIApplication sharedApplication] setStatusBarOrientation: UIInterfaceOrientationPortrait animated:NO];
				break;
			case CCDeviceOrientationPortraitUpsideDown:
				[[UIApplication sharedApplication] setStatusBarOrientation: UIInterfaceOrientationPortraitUpsideDown animated:NO];

				targetTransform_ = CGAffineTransformTranslate(targetTransform_, w, h);
				targetTransform_ = CGAffineTransformRotate(targetTransform_, CC_DEGREES_TO_RADIANS(180));
				targetTransform_ = CGAffineTransformTranslate(targetTransform_, -w, -h);
				break;
			case CCDeviceOrientationLandscapeLeft:
				[[UIApplication sharedApplication] setStatusBarOrientation: UIInterfaceOrientationLandscapeRight animated:NO];

				targetTransform_ = CGAffineTransformTranslate(targetTransform_, w, h);
				targetTransform_ = CGAffineTransformRotate(targetTransform_, -CC_DEGREES_TO_RADIANS(90));
				targetTransform_ = CGAffineTransformTranslate(targetTransform_, -h, -w);
				break;
			case CCDeviceOrientationLandscapeRight:
				[[UIApplication sharedApplication] setStatusBarOrientation: UIInterfaceOrientationLandscapeLeft animated:NO];

				targetTransform_ = CGAffineTransformTranslate(targetTransform_, w, h);
				targetTransform_ = CGAffineTransformRotate(targetTransform_, CC_DEGREES_TO_RADIANS(90));
				targetTransform_ = CGAffineTransformTranslate(targetTransform_, -h, -w);
				break;
			default:
				NSLog(@"Director: Unknown device orientation");
				break;
		}
	}
}

-(void) applyLandscape
{
	static float m[16];

	// Blend toward the target transform during the first 0.25 s after an
	// orientation change; dt is CCDirector's per-frame delta time.
	if ( elapsedSinceLastOrientationChange_ < 0.25f )
	{
		currentTransform_ = CGAffineTransformInterpolate(&currentTransform_, &targetTransform_,
														 elapsedSinceLastOrientationChange_ / 0.25f);
		elapsedSinceLastOrientationChange_ += dt;
	}
	else
	{
		currentTransform_ = targetTransform_;
	}

	CGAffineToGL(&currentTransform_, m);
	glMultMatrixf(m);
}

Now cocos2d handles orientation changes with a nice transition animation. I've uploaded the diff patch in case you're interested; it is based on cocos2d iphone 0.99.0.

Posted by: eygneph | 2010/03/29

New Game Prototype

Inspired by EGP's "Attack of the Killer Swarm", I've come up with an idea for simulating a swarm (or flock, whatever) in a game. A vector field has been helpful in developing the movement control, and some clustering techniques and marching cubes (actually marching squares) are also used in this demo. A sketch of the vector field steering follows.
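The post doesn't include source, but steering from a vector field can be as simple as the following sketch (all names and the grid layout are my own assumptions): each agent samples the field cell under it and eases its velocity toward the sampled direction.

#include <algorithm>
#include <utility>
#include <vector>

// A coarse grid of 2D directions covering the play field.
struct VectorField {
    int w = 0, h = 0;
    float cell = 32.0f;                        // world-space size of one cell
    std::vector<std::pair<float, float>> dirs; // w*h direction vectors

    std::pair<float, float> sample(float x, float y) const {
        int ix = std::clamp(int(x / cell), 0, w - 1);
        int iy = std::clamp(int(y / cell), 0, h - 1);
        return dirs[iy * w + ix];
    }
};

// Ease an agent's velocity toward the field direction; turnRate in [0, 1]
// controls how sharply it follows the field (1 = snap instantly).
void steer(float& vx, float& vy, const VectorField& field,
           float x, float y, float speed, float turnRate) {
    auto [dx, dy] = field.sample(x, y);
    vx += (dx * speed - vx) * turnRate;
    vy += (dy * speed - vy) * turnRate;
}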

Here’s the video on youtube:

Posted by: eygneph | 2010/02/26

The complexity in shader combination

Shader explosion, or shader combination, has always been a headache in modern game development.

With the power of high level shading languages, people can be much more productive in defining how a surface should look under a certain lighting environment. But flexibility comes with a price. For example, your animation guy carefully crafts a 28-bone hardware skinning shader, your shadow guy writes a stunning penumbra-correct soft shadow shader, and you write some tone mapped HDR lighting from image light sources. We're all happy to see those individual effects in FX Composer or in the Max viewport. But once you integrate those effects into your editor, the problem appears: some game objects need to receive soft shadows and animation, other game objects need HDR lighting and animation, and most of them can be affected by an arbitrary number of lights, each of which can be directional, a spotlight, or an omni, and should also be affected by fog. So you end up with sooooo many shaders called:

HardwareSkinning_ShadowReceiver_Omni_Fog()
HardwareSkinning_ShadowReceiver_Directional_Fog()
HardwareSkinning_HDRLighting_Spotlight_Fog()
HardwareSkinning_ShadowReceiver_HDRLighting_Spotlight()
MorphAnimation_ShadowReceiver_HDRLighting_Omni()
MorphAnimation_HDRLighting_Directional_Fog()
… etc etc

As you can see, this is a combinatorial increase in the number of shaders, and it is hard to solve by brute force. Shawn Hargreaves of the Microsoft XNA team has described it as an unsolved problem in today's graphics world. Many people have come up with solutions to this issue, but each has its advantages and disadvantages.

So instead of putting my naive opinion here, let's take a look at how other people address this issue:

The uber-shader (or SuperShader) approach puts *all* shading techniques into one huge shader, then uses static branching, dynamic branching, or conditional compilation to strip out the unnecessary code. This is simple at design time and for tool development, but it comes at a cost in runtime performance.
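For illustration, here's a toy sketch of the conditional-compilation flavor (my own code; the feature names and compile hook are invented, not any engine's real API): key each compiled variant on a bitmask of features and build the #define list on demand, so only combinations that are actually used get compiled.

#include <cstdint>
#include <map>
#include <string>

// Feature bits that select code paths inside one uber-shader source.
enum ShaderFeature : uint32_t {
    SKINNING    = 1 << 0,
    SOFT_SHADOW = 1 << 1,
    HDR         = 1 << 2,
    FOG         = 1 << 3,
};

using ShaderHandle = int;

// Stub: a real implementation would hand 'defines' plus the uber-shader
// source to the platform's shader compiler.
ShaderHandle compileWithDefines(const std::string& defines) {
    static ShaderHandle next = 0;
    (void)defines;
    return next++;
}

// Cache so each feature combination is compiled at most once.
ShaderHandle getShaderVariant(uint32_t features) {
    static std::map<uint32_t, ShaderHandle> cache;
    auto it = cache.find(features);
    if (it != cache.end())
        return it->second;

    std::string defines;
    if (features & SKINNING)    defines += "#define USE_SKINNING\n";
    if (features & SOFT_SHADOW) defines += "#define USE_SOFT_SHADOW\n";
    if (features & HDR)         defines += "#define USE_HDR\n";
    if (features & FOG)         defines += "#define USE_FOG\n";

    ShaderHandle h = compileWithDefines(defines);
    cache[features] = h;
    return h;
}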

Direct3D 11 introduces a dynamic shader linkage feature to address this problem; see the Dynamic Shader Linkage 11 sample in the DirectX SDK. Dynamic shader linkage is a bit like a "standardized" #ifdef uber-shader. You declare base material/light interfaces and derive concrete ones; the client then specifies the concrete material/light shaders on the C++ side and binds them to the right shader variables via D3D calls. The pixel shader just uses the abstract base class to shade the surface. This gives me the impression of dynamic branching (or more precisely, dynamic linking plus static compilation). That's why I call it a standardized uber-shader.

NVLink is a tool that assembles shader fragments at the assembly level to solve the combination problem. I didn't dig too deeply into NVLink, because it appears to support only assembly shader code.

ID3DXFragmentLinker is a similar interface addressing the issue. Again, I didn't spend much time on it, because many sources on the web say it's not practical for serious development; even XNA doesn't use it!

Unreal Engine 3 and mental images use node based editors (UnrealEd and mental mill) to weave shader graphs and compile them into target shaders automatically. This is a novel solution, but it requires a lot of programming and tooling effort, even inventing new shading languages such as MetaSL and .usf. By the way, 3ds Max 2010 ships with a "free" mental mill artist edition, and the UDK is also available as a free download, so check them out.

Shawn has compiled a thorough list of techniques addressing the issue: uber-shaders, micro shaders, and his own approach using HLSL fragments and code emission.

Yann uses an Effect-like system to handle the passes and dependencies of individual shaders, but with finer control, including a shader cache and priority-based shader selection.

Here are some references I consider useful:

Why shader caching/combination matters and what the problem is:
http://www.gamedev.in/showthread.php?p=602
http://blogs.msdn.com/shawnhar/archive/2009/08/17/combining-shaders.aspx

Yann’s approach:
http://www.gamedev.net/community/forums/topic.asp?topic_id=169710

Shawn’s collection and his approach:
http://www.talula.demon.co.uk/hlsl_fragments/hlsl_fragments.html

Autodesk talking about using MetaSL to facilitate game production:
http://www.gamedev.net/columns/events/gdc2009/article.asp?id=1746

Posted by: eygneph | 2010/02/22

Something about iPad

I'm checking out iPad stuff today. Under the NDA I can't say too much, but an iPad app needs a thorough redesign of your game/application. Because the screen is larger and an iPad user can hold the device in any orientation, there's much more to consider, especially for productivity and utility apps.

Besides, I'm happy to see cocos2d has released its first iPad compatible version, v0.99.0. Check it out! cocos2d has always been the workhorse for 2D games in our studio, so it's nice to have it ready as we get our hands dirty with the iPad. Good job, cocos2d!

Posted by: eygneph | 2010/02/22

C# Constraints Syntax

Well, this may be straightforward for a C# expert, but it confused a newbie like me for hours. If you need to constrain a generic type to a specific type, you can use the following syntax:

void Foobar<T>(T[] myarray) where T : MyType

But if you only need to constrain the generic T to a value type (that is, a struct), you cannot just say:

void Foobar<T>(T[] myarray) where T : System.ValueType

This gives you a compiler error complaining "error CS0702: Constraint cannot be special class 'System.ValueType'". Okay, I admit I was stuck on this for an hour without referring to the super basic C# generics programming guide. So here's the right syntax:

void Foobar<T>(T[] myarray) where T : struct

Done! I have no idea why C# chose this design. Also, while writing this entry, I found this post insanely useful for posting source code.

Posted by: eygneph | 2010/02/08

Next step: voxel-based game

It's been a week since the bizarre gamma4 game submission. Whatever the outcome (of course I hope it's a good one!), I've started thinking about what's next.

Basically I need a game with an easy-to-customize avatar/weapon, some stylish graphics, and, most importantly, the right gameplay feel. Here are some thoughts. Meanwhile, building a world is no doubt difficult, and is accessible only to professional developers with in-house tools, or to mod communities. This is really unfriendly to a user-generated-content methodology and design philosophy. In recent years some games have shipped with in-game editors, but they either still demand a lot of time to create a new world, or are too simple a tool to be creative with.

After some reading and searching, I've come up with an idea based on voxels, or volume based methods. Voxel based rendering goes back to the 1990s and was abandoned by graphics developers once 3D acceleration cards dominated the PC market. However, it still has benefits that triangle meshes can't match: deformable terrain, partially destructible objects, easy level of detail, the ability to carve out models, and solid volumes instead of an eggshell world. Furthermore, in recent years the GPU has become a complex beast with an inefficient programmable pipeline, and the community has put more effort into GPGPU/hybrid solutions to take advantage of raytracing approaches. This also gives voxel based methods a huge chance. John Carmack of id Software has indicated that volumetric or hybrid methods will be a major part of id Tech 6. Jon Olick, also a veteran programmer at id Software, shared his thoughts in a SIGGRAPH 2008 course. And Tim Sweeney, the creator of the Unreal Engine, has stated that volumetric and hybrid approaches might dominate the next 10 years of the graphics market.

So what about my game? Alright, that tough, pioneering research isn't going to put food on our table, so I'll leave it to the Carmacks 🙂 But that doesn't mean I'll just stand and wait! Even a low-tech studio like ours can do something different and innovative. One example is a game called 3D Dot Game Heroes on PS3. Something like that, but with a different feel.

Reference:

Carmack on id Tech 6: http://www.pcper.com/article.php?aid=532
Jon Olick's course notes on the voxel approach: http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf
Tim Sweeney on the future of graphics: http://graphics.cs.williams.edu/archive/SweeneyHPG2009/TimHPG2009.pdf

Posted by: eygneph | 2010/02/06

New Crossout! gameplay footage is posted

Hey, here's the new gameplay footage. Remember to view it in HD!

Posted by: eygneph | 2010/01/31

Gamma4… finally

It's been ages since I updated this blog. I've been super busy with my day job, and in my night hours I've been working on the gamma4 submission. The rule is to create a game that can be played with only one button. I came up with a game called "Crossout!" and got it submitted tonight. You can find the details of the game here.

I'm exhausted, although the experience was fun and interesting. Today I'll go straight to bed and get a sweet eight hours of sleep!
