Friday, July 2, 2010

Transmission Comparisons

Some are curious about gear ratios, torque capacity and other differences among the various transmissions/transaxles used in many of today's performance cars. So here is a list that I may add to later:

Transmission                                   Tq. Cap. (ft-lb)   1st    2nd    3rd    4th    5th    6th
Tremec T-56 Magnum (2.66 1st)                  700                2.66   1.78   1.30   1.00   0.80   0.63
Tremec T-56 Magnum (2.97 1st)                  700                2.97   2.10   1.46   1.00   0.74   0.50
Tremec T-56 Viper-spec                         750                2.66   1.78   1.30   1.00   0.74   0.50
Tremec T-56 Viper-spec (taller 5th and 6th)    750                2.66   1.78   1.30   1.00   0.80   0.62
Getrag 6MTI500/MT82                            N/A                3.66   2.43   1.69   1.32   1.00   0.65
Ford 6R80 (Automatic)                          800                4.17   2.34   1.52   1.14   0.87   0.69
Tremec T-5                                     300                2.95   1.94   1.34   1.00   0.63   NA
Ricardo 6-speed Transaxle (Ford GT)            500                2.61   1.71   1.23   0.94   0.77   0.63
Tremec TKO 600                                 600                2.87   1.89   1.28   1.00   0.64   NA
Tremec TKO 500                                 500                3.27   1.98   1.34   1.00   0.68   NA
Tremec TR-6060                                 600                2.97   1.78   1.30   1.00   0.80   0.63
ZF 6HP26                                       326                4.17   2.34   1.52   1.14   0.87   0.69
ZF 6HP28                                       444                4.17   2.34   1.52   1.14   0.87   0.69

Also, it is worth noting that the actual gear ratios used in a particular vehicle application may change per the manufacturer's choice. The typical reasons are the engine setup and the target fuel economy for the vehicle class. For example, to help bring fuel economy ratings in line with regulations, a manufacturer may select taller (numerically lower) gears, especially for the last two; a 0.50 sixth gear instead of a 0.63, for instance, cuts highway cruise RPM by roughly 20% for the same final drive and tire. The same is true for the final drive ratio.

Below are several example tire and gearing setups showing wheel speed (MPH) versus engine speed (RPM) in first gear.
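
For reference, these speeds follow from the usual relationship: MPH = (RPM × tire diameter in inches × π × 60) / (gear ratio × final drive × 63,360), where 63,360 is the number of inches in a mile. A minimal C# sketch of that calculation (the method and parameter names are mine, not from any particular tool):

using System;

class GearingCalc {
  // Road speed (MPH) for a given engine speed, overall gearing and tire size.
  // tireDiameterIn is the tire's outside diameter in inches.
  static double WheelSpeedMph(double rpm, double gearRatio,
                              double finalDrive, double tireDiameterIn) {
    double tireCircumferenceIn = Math.PI * tireDiameterIn;  // inches per wheel revolution
    double wheelRpm = rpm / (gearRatio * finalDrive);       // wheel revolutions per minute
    double inchesPerHour = wheelRpm * tireCircumferenceIn * 60;
    return inchesPerHour / 63360.0;                         // 63,360 inches per mile
  }

  static void Main() {
    // 26.13 in tire, 2.97 first gear, 3.73 final drive at 8000 RPM -> about 56.1 MPH
    Console.WriteLine(WheelSpeedMph(8000, 2.97, 3.73, 26.13).ToString("F2"));
  }
}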

Tire Dia.: 26.13 in, 1st Gear Ratio: 2.97, Final Gear Ratio: 3.73

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  56.14
7500                  52.63
7000                  49.12
6500                  45.61
6000                  42.10
5500                  38.59
5000                  35.09
4500                  31.58
4000                  28.07
3500                  24.56
3000                  21.05
2500                  17.54
2000                  14.03
1500                  10.53
1000                   7.02

Tire Dia.: 26.13 in, 1st Gear Ratio: 3.66, Final Gear Ratio: 3.73

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  45.55
7500                  42.71
7000                  39.86
6500                  37.01
6000                  34.17
5500                  31.32
5000                  28.47
4500                  25.62
4000                  22.78
3500                  19.93
3000                  17.08
2500                  14.24
2000                  11.39
1500                   8.54
1000                   5.69

Tire Dia.: 28.0 in, 1st Gear Ratio: 3.66, Final Gear Ratio: 3.73

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  48.81
7500                  45.76
7000                  42.71
6500                  39.66
6000                  36.61
5500                  33.56
5000                  30.51
4500                  27.46
4000                  24.41
3500                  21.36
3000                  18.31
2500                  15.25
2000                  12.20
1500                   9.15
1000                   6.10

Tire Dia.: 28.0 in, 1st Gear Ratio: 2.61, Final Gear Ratio: 3.73

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  68.45
7500                  64.17
7000                  59.90
6500                  55.62
6000                  51.34
5500                  47.06
5000                  42.78
4500                  38.50
4000                  34.23
3500                  29.95
3000                  25.67
2500                  21.39
2000                  17.11
1500                  12.83
1000                   8.56

Tire Dia.: 28.0 in, 1st Gear Ratio: 2.66, Final Gear Ratio: 3.73

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  67.17
7500                  62.97
7000                  58.77
6500                  54.57
6000                  50.37
5500                  46.18
5000                  41.98
4500                  37.78
4000                  33.58
3500                  29.38
3000                  25.19
2500                  20.99
2000                  16.79
1500                  12.59
1000                   8.40

Tire Dia.: 28.0 in, 1st Gear Ratio: 2.97, Final Gear Ratio: 3.73

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  55.86
7500                  52.37
7000                  48.88
6500                  45.38
6000                  41.89
5500                  38.40
5000                  34.91
4500                  31.42
4000                  27.93
3500                  24.44
3000                  20.95
2500                  17.46
2000                  13.96
1500                  10.47
1000                   6.98

Tire Dia.: 28.0 in, 1st Gear Ratio: 2.66, Final Gear Ratio: 4.10

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  56.74
7500                  53.19
7000                  49.65
6500                  46.10
6000                  42.55
5500                  39.01
5000                  35.46
4500                  31.92
4000                  28.37
3500                  24.82
3000                  21.28
2500                  17.73
2000                  14.18
1500                  10.64
1000                   7.09

Tire Dia.: 28.0 in, 1st Gear Ratio: 2.66, Final Gear Ratio: 3.55

Engine Speed (RPM)    Wheel Speed (MPH)
8000                  65.53
7500                  61.43
7000                  57.34
6500                  53.24
6000                  49.15
5500                  45.05
5000                  40.96
4500                  36.86
4000                  32.77
3500                  28.67
3000                  24.57
2500                  20.48
2000                  16.38
1500                  12.29
1000                   8.19

Tuesday, March 16, 2010

LINQ-To-Business: Don’t fight the ORM - keep your BLL

Previously, I posted here about DDD, repositories, ORMs and .NET and how they fit in today's software topology. Well, here we go for round 2.
I want my BLL!

Okay, I said it before and I'll say it again: until the day comes that an ORM either [1] becomes a silver bullet (heh) or [2] evolves into a full-fledged runtime (like a "lightweight" version of BizTalk), I REFUSE to coerce my business-centric stuff into an ORM, for reasons that should be obvious (must I articulate them?). While the ORM (ADO.NET Entities, etc.) is a godsend for DBMS-to-OOPL mapping, I'm still left with a sort of impedance mismatch against my dedicated business layer. I ventured off in search of a solution and, after much digging, finally gathered enough to start the soaking process (some call it design). As in my last post, I started a concoction, in concept, that wasn't so bad. The only bothersome part was that it didn't neatly square up with the existing technology. But then I discovered a fresh approach by Randolph Cabral, the essence of which is to wrap the DataContext with a BusinessContext. Another fellow who deserves mention is Mike Hadlow, whose blog covers a generic repository implementation which, along with a few other similar "IRepository" implementations, I base my design approach on. As you will see later, the generic Repository, along with LINQ, succinctly becomes the main pillar. Here was the general concept:

  public class NorthwindDataContext : DataContext {
    // ...
  }

  public class CustomerBus {
    public CustomerBus() {
      //Validators.Add(new SimpleDataValidator());
    }

    internal CustomerBus(Customer linqEntity) 
             : this() {
      LinqEntity = linqEntity;
    }

    private Customer _linqEntity = null;

    internal Customer LinqEntity {
      get {
        if (_linqEntity == null)
          _linqEntity = new Customer();

        return _linqEntity;
      }
      set {
        _linqEntity = value;
      }
    }
  }

  public class NorthwindBusinessContext 
                             : IDisposable {
    protected NorthwindDataContext DataContext { 
      get; set; 
    }

    public CustomerBus GetCustomerBusBy(
                            string customerID) {
      var linqEntity = DataContext
                  .GetTable<Customer>()
                .Single(entity => 
                  entity.CustomerID == customerID);
      var ret = new CustomerBus(linqEntity);
      return ret;
    }

    // TODO: Implement other methods here

    #region IDisposable Members

    public void Dispose() {
      DataContext.Dispose();
    }

    #endregion
  }

Figure 1
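
To make the shape of that wrapper concrete, here is a minimal consumption sketch against the Figure 1 types (how the DataContext gets initialized is glossed over here):

  // Hedged sketch: consuming the BusinessContext wrapper from Figure 1.
  using (var biz = new NorthwindBusinessContext()) {
    // The business layer hands back a CustomerBus business object; the
    // underlying LINQ-to-SQL Customer entity stays tucked away behind
    // the internal LinqEntity property.
    CustomerBus customer = biz.GetCustomerBusBy("ALFKI");

    // Apply business behavior/validation against the business object here,
    // never against the raw Customer entity.
  }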

Admittedly, I had an oh duh moment. It makes sense! The concept is great, for a bunch of reasons:
  • can leverage template-based code generation (e.g. T4/visual studio templates, designers)
  • gives a clean abstraction to underlying ORM technology
  • leverages/reuses many of the core technology constructs
  • has the same familiar feeling and developer experience
  • follows the technology's investment trend/path (i.e. future proof)
LINQ-To-Business, Anyone?
It hit me: Wow, this looks pretty darn close to a LINQ provider! So I said to myself, "No problem, I'll write a LINQ provider. We'll call it LinqToBusiness or LinqToDomain (I prefer the former; I'm old school contemporary :) )." So for my needs, I wanted the following additional things:
  • deferred querying (so that it got optimized all the way down to the data source)
  • a reusable base framework so that I only have to write it once
  • a simple model mapping scheme to make it designable
  • business objects to be completely persistence ignorant (no DAO or ActiveRecord injection)
  • out-of-the-box support for generic repositories that can be wrapped by declarative repositories
  • have the same semantics and interface feeling as other LINQ providers
  • support Updateable/Observable LINQ extensions
Revisiting the Case
As a common case, I still have a profound desire for a dedicated business layer. With any involved business - especially at a B2B/enterprise level - it isn't uncommon to deal with more than one data source within a given system. Surely many of us know or have experienced that. That is the primary and fundamental rationale for a dedicated business layer. As an illustration, my "Order" entity may actually have many different data sources (not limited to a classic DBMS, mind you; what if it comes from an external/federated service?) for each department within a business "domain". The Northwind warehouse probably has some oldie-goldie legacy system for the order item picker, purchasing probably has some reference to monitor stock fulfillment, and so on. (Microsoft had (has?) a cool lab demonstrating a true enterprise order system showcasing BizTalk, WCF, WF and so on, complete with a COBOL/CICS legacy product picker. Hello, green and black.) Behind my nifty business services layer would be my good old component-oriented business layer, wrapped by coarse-grained wrappers where needed.

The Need
So in a nutshell, I'm looking to establish a "general purpose" BusinessContext that I could generate/model using T4 templates (or whatever) and then independently model the true business domain.

The Solution?
Consider the following (needs touch-up to compile, but for concept):


using System;
using System.Collections;
using System.Collections.Generic;
using System.Reflection;

public interface IBusinessRootEntity { }

public interface IBusinessEntity {
  string Id { get; set; }
}

public interface IRepository<T> where T : class, IBusinessRootEntity {
  // To provide a coarse-grained/controlled interface over the
  // SQO where needed; factor into a different interface.
  //IBusinessEntity GetBy(string id);
  void Add(T entity);
  void Remove(T entity);
}

public class Repository<T> : IRepository<T>, IEnumerable<T>
    where T : class, IBusinessRootEntity {

  BusinessContext biz;
  BusinessMappingSource mappingSource;
  BusinessMetaDataModel model;

  internal Repository(BusinessContext businessContext,
                      BusinessMappingSource mappingSource) {
    biz = businessContext;
    this.mappingSource = mappingSource;
    model = mappingSource.GetModel(typeof(T));
  }

  public void Add(T entity) { /* TODO: defer to the mapped data source */ }
  public void Remove(T entity) { /* TODO: defer to the mapped data source */ }

  public IEnumerator<T> GetEnumerator() {
    // TODO: materialize from the mapped data source(s)
    yield break;
  }

  IEnumerator IEnumerable.GetEnumerator() {
    return GetEnumerator();
  }
}

public class BusinessContext : IDisposable {
  //IDictionary<Type, BusinessMetaDataModel> modelMap =
  //    new Dictionary<Type, BusinessMetaDataModel>();
  BusinessMappingSource mappingSource;
  string ConnectionResource { get; set; }

  public BusinessContext(string connectionResource) {
    mappingSource = new BusinessMappingSource(connectionResource, this.GetType());
    ConnectionResource = connectionResource;
    // TODO: Model map should be read from the mapping source, given
    // the optional connectionResource (e.g. config section, resource
    // manifest name, etc.)
    //modelMap
  }

  public virtual Repository<T> GetRepository<T>()
      where T : class, IBusinessRootEntity {
    Repository<T> repo = (Repository<T>)Activator.CreateInstance(
        typeof(Repository<>).MakeGenericType(new Type[] { typeof(T) }),
        BindingFlags.NonPublic | BindingFlags.Public | BindingFlags.Instance,
        null,
        new object[] { this, mappingSource },
        null);
    return repo;
  }

  // Mark the unit of work as done and, by default, commit.
  public virtual void Save() {
    // TODO: flush tracked changes to the underlying data source(s)
  }

  public void Dispose() {
    // TODO: release the underlying data source resources
  }
}

public class BusinessMappingSource {
  private BusinessMetaDataModel model;

  public BusinessMappingSource(string uri, Type contextType) {
    model = new BusinessMetaDataModel(uri, contextType, this);
    // other stuff here
  }

  public BusinessMetaDataModel GetModel(Type context) {
    return model;
  }

  // implement other supporting stuff ...
}

public class BusinessMetaDataModel {
  private object identity = new object();

  // TODO: Add mapping container here and implement supporting
  // mapping provider logic
  protected BusinessMetaDataModel() { }

  internal BusinessMetaDataModel(string connectionSource, Type contextType,
                                 BusinessMappingSource mappingSource) {
    ConnectionSource = connectionSource;
    ContextType = contextType;
    MappingSource = mappingSource;
  }

  public Type ContextType { get; set; }
  public string ConnectionSource { get; set; }
  internal object Identity { get { return this.identity; } }
  public BusinessMappingSource MappingSource { get; private set; }
}


Now for the declarative, solution-specific model (VERY thin):


public class NorthwindBusinessContext : BusinessContext {
  NorthwindDataContext db;

  public NorthwindBusinessContext(string connection) : base(connection) { }

  // DEVNOTE: Table = data-centric view of a data MVC (i.e. ORM);
  // Repository = business-centric view of the business MVC.
  public Repository<Customer> Customers {
    get { return this.GetRepository<Customer>(); }
  }

  public Repository<Order> Orders {
    get { return this.GetRepository<Order>(); }
  }
}

And to consume it:


// The context is a unit of work, an MVC and a LINQ provider all in one.
// Let's make it transactional too, shall we?
// (connectionResource identifies the business model's mapping/connection info.)
using (var biz = new NorthwindBusinessContext(connectionResource)) {
  var q = from c in biz.Customers
          // NOTE: We're in the business context; no surrogate keys are
          // visible unless the surrogate is also the "business key".
          // (Remember, in an OLAP system it may be CustomerKey, in a
          // SQL Server OLTP system CustomerID, and in an oldie COBOL
          // VSAM system CUSTOMER-ID PIC(X), noting the EBCDIC character set.)
          where c.Id == "ALFKI"
          select c;

  var c = q.Single();

  Order o = new Order();
  o.OrderID = Guid.NewGuid();
  // set the rest of the order ...

  // Careful, we are in the business context now, so we should speak
  // "business lingo", not data lingo; InsertOnSubmit is the wrong lingo.
  c.Orders.Add(o);

  // mark the unit of work as done and, by default, commit
  biz.Save();
}

With this approach, everything stays neat and clean and leverages all of the latest facilities. The only real requirement on the consuming developer's part is specifying the business model and its relationship to the underlying data sources. That's where the custom LINQ-To-Business query provider comes into play, and I will save that for a later post.
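
Purely as orientation for that later post, the standard skeleton any custom LINQ provider hangs off of is an IQueryable<T>/IQueryProvider pair, roughly like this (the type names here are placeholders, not the final LinqToBusiness design):

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Placeholder skeleton of the IQueryable/IQueryProvider pair that any custom
// LINQ provider (LinqToBusiness included) is built on.
public class BusinessQuery<T> : IQueryable<T> {
  public BusinessQuery(BusinessQueryProvider provider) {
    Provider = provider;
    Expression = Expression.Constant(this);
  }

  internal BusinessQuery(BusinessQueryProvider provider, Expression expression) {
    Provider = provider;
    Expression = expression;
  }

  public Type ElementType { get { return typeof(T); } }
  public Expression Expression { get; private set; }
  public IQueryProvider Provider { get; private set; }

  public IEnumerator<T> GetEnumerator() {
    return ((IEnumerable<T>)Provider.Execute(Expression)).GetEnumerator();
  }

  IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public class BusinessQueryProvider : IQueryProvider {
  public IQueryable<TElement> CreateQuery<TElement>(Expression expression) {
    return new BusinessQuery<TElement>(this, expression);
  }

  public IQueryable CreateQuery(Expression expression) {
    // Non-generic path omitted in this sketch.
    throw new NotImplementedException();
  }

  public TResult Execute<TResult>(Expression expression) {
    return (TResult)Execute(expression);
  }

  public object Execute(Expression expression) {
    // This is where the expression tree over the business model would be
    // translated via the mapping source and pushed down to the underlying
    // data source(s).
    throw new NotImplementedException();
  }
}

All of the LinqToBusiness smarts would live in Execute, where the captured expression tree gets translated against the business model mapping and optimized all the way down to the data source, which is exactly the deferred querying goal listed earlier.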

The ultimate goal is to stay within the technology paradigm so that we can squeeze every last drop of goodness out of the technology environment. Leveraging the power of generics, lambdas, anonymous types, type inference, extension methods and, of course, the LINQ infrastructure, you get the best experience possible. I'd argue that, of these, the most profound benefits are compile-time support via the type-safety features (generics, inference) and the query ability via the LINQ standard query operators (SQO).

This is only the beginning of this idea, so stay tuned...

Wednesday, March 10, 2010

DDD and the Repository

Repository vs. DAO vs. ORM

So there is all this debate about repositories and sizing them up against other accepted techniques, the most common being DAOs and ORMs. Out of it have come some pretty heated arguments. Much of it seems to be a battle of egos. Okay, that's not a nice thing to say; I'll take that back. Actually, I think it is a cultural thing. No, really. We have these more classical technologies that are beginning to mix with newer ones. The two cultures are generally quite different. The classicist guys have always done it a certain way and have ground themselves into a profound fondness for the techniques they adopt. Rightfully so, too! It has worked for decades! Then we have the new - let's call them contemporary - developers, who are gung-ho and evangelistic about these new techniques and about how they're going to solve the world's problems. Well, admittedly, not all are like that, thankfully, but you get the idea.

Folks, welcome to software engineering. Rather than take sides, I'm one to stand back and study the scene - look before you leap, right? At the same time, you have to choose something and go with it! Argh, I say! Stuck between a rock and a hard place! It's true that developers often get too feisty and "protect" their view (too often, I do too), rather than staying humble and searching for a better truth, even if the truth "depends" and won't instantly come within a single conversation, or even a few conversations. But once in a while, I meet a developer or two who can do this and are extremely humble yet productive. Sadly, most of them I "met" online and never had the pleasure to work with! I totally take my hat off to them and appreciate and respect them greatly. It is very honorable.

Anyway, until recently, I was a data access layer (DAL) kind of guy with total trust in ORM technologies as the "be all" solution. Well, almost. It's just that my business logic layer (BLL) was more like a "facade with business logic" than a domain-centric layer. But then I began a new project. Now I have multiple data sources to deal with and a plan for a services layer atop all of this "stuff". It has grown large enough that testability has become imperative, too.

Hello, repository (or whatever you are)

So here I am, in the DDD marketplace, and I'm going shopping. And I'm NOT buying the first deal I see! Onward. It has taken me a while to appreciate what this "repository" really is about. I'll be honest: at first, I just didn't get it. I thought to myself (still in a data-centric mentality, mind you), why on earth would I want to develop a repository when my ORM provides the same things? So I went and read the repository pattern definition by Martin Fowler again ... then again ... and then one more time. It still didn't stand out any more distinctly than a DAO pattern to me. And then the DDD community has a slightly different definition for it, which mentions aggregate roots, entity objects and value objects. This difference of precision, and perhaps ambiguity, in a pattern is what got me, and I suspect others too. So some argue that the DAO serves the same essence as the repository and that the repository is overkill. Others say that the repository even replaces the DAO. And then many say that both are needed. Ironically, though, everyone seems to agree on the notion of "business objects"; only their exact implementation seems to vary. So rather than arguing with fellow developers or just protecting a view, I've kept my lips sealed and done a lot of reconnaissance work.

Rant: It often takes a certain amount of soak time for things to be really and deeply realized - I don't care how "smart" you are. Some get right to it and bang it out on the keyboard and beat it into shape, which isn't always a bad thing granted that [1] you already know what you are doing or [2] it is a sort of prototyping or experimental effort that will deliver some kind of surviving value – even if it was an intentional “lessons learned” trial.

What about ORMs? Don't these babies serve the purpose?

Object-relational mappers (ORMs) have been treated as a godsend, and they should be for what they provide. At the same time, however, ORMs are still maturing and there are many different styles out there: Hibernate/NHibernate, LLBLGen Pro, WilsonORM, LinqToSql and ADO.NET Entities, to name a few. Each one has different caveats and a different feature set, and every one of them has a different interface. Some provide caching. Many (but not all) provide model shaping. Most have code generation support. I will point out, though, that the most sought-after thing is the notion of persistence ignorance - a sort of holy grail for component-oriented developers. Some but not all ORMs fully support this, as their mapping scheme sometimes "pollutes" the interface. That runs up against the Interface Segregation Principle (ISP), one of the five SOLID principles introduced by Robert Martin, which asserts that interfaces should remain cleanly factored, each defining a discrete interface that targets a single theme/concept:

The ISP says that once an interface has gotten too 'fat' it needs to be split into smaller and more specific interfaces so that any clients of the interface will only know about the methods that pertain to them. In a nutshell, no client should be forced to depend on methods it does not use.
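
A quick, made-up illustration of that splitting in repository terms (the interface names are just for the example):

using System.Collections.Generic;

// A "fat" repository interface forces every client to see everything...
public interface IFatRepository<T> {
  T GetById(string id);
  IEnumerable<T> Search(string criteria);
  void Add(T entity);
  void Remove(T entity);
  void BulkImport(IEnumerable<T> entities);
}

// ...whereas ISP-friendly factoring lets each client depend only on the
// slice it actually uses.
public interface IReadRepository<T> {
  T GetById(string id);
  IEnumerable<T> Search(string criteria);
}

public interface IWriteRepository<T> {
  void Add(T entity);
  void Remove(T entity);
}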

Another similar principle that it violates is the High Cohesion Principle (HCP) of the General Responsibility Assignment Software (GRASP) principles, defined as:

High Cohesion is an evaluative pattern that attempts to keep objects appropriately focused, manageable and understandable.

The most typical example of employing these principles is the desire for a clean POCO/POJO model to use across many ORMs, or even as the basis for the business layer. Some ORMs allow decorating the class types in the model with attributes, thereby "polluting" the model with technology-specific elements. A few ORM model designer tools even encourage this by making it the default behavior. Until .NET 4.0, ADO.NET Entities worked much this way, though it has always supported external mappings. NHibernate by default uses external mappings but has support for attribute-based mapping, too. This is perhaps why many still favor NHibernate, in addition to its maturity. All said and done, however, the ORM is inherently data-centric. So until the day comes that an ORM evolves into a full-fledged modeling framework (at which point I would quickly assert it has superseded the ORM), using the ORM as a "silver bullet" is, I'm afraid, smelly for all but relatively simple applications. Ponder a few reasons:

  • Every time the underlying DB changes, the model is subject to change.
  • Violates the ISP of the SOLID principles because the interface becomes too "fat."
  • No out-of-the-box support for physically distributed tiers, where the DAL resides on one server and the BLL resides on another server in the middleware.
  • Little or no support for multiple data sources for a given model.

So a repository is handy after all. What is the missing piece?

I digress. I've come to think of my repository as a "view" of the model. Simple as that. So there is a need for a controller in all this to make it work right. And I don't think the repository should be a controller AND a view. So in my scenario, I've come up with another type to facilitate this. Further, I actually think of my repository in terms of a "resource". I borrow this idea from how the System.Transactions API works in .NET, as well as from the "transactional programming" paradigm. Unless you've been under a rock, you've noticed all the work and research going on around this: STM.NET, apache.commons.transaction, COM+/Enterprise Services, among others. So for me, the whole DAO vs. repository issue becomes moot as I move much of that logic into a controller. And if I don't have or need a repository, but do have a DAO (which in my case I ALWAYS have), my controller just uses that instead. Mind you, I use modern ORMs to provide the DAO, ActiveRecord and DataMapper stuff, so my "DAO" is whatever object my ORM provides, in the way it provides it. (No more hand-coding DAOs, thank goodness.) I've even gone a step further (in concept at least) to provide the notion of an "agent" over the top of my repository. This allows some degree of autonomy for my domain layer. It can self-initialize. It can decide on different repositories (think: "resources") to consume based on parameters or the environment. Maybe I set a test/simulate flag in a config file; the agent detects it and can provide "mock" or proxy services for me. And so on.

Actually, if your business objects in your BLL stay proper and true to their purpose, they are inherently a “mock” of the underlying data [objects] anyway! So if you don’t “bind” or attach them to any underlying data source, you have truly transient objects that you can use for testing and simulation. Further, if you implement n-level undo/redo into your business objects, you now have the playback features that are also usable application features.
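
As a toy illustration of that n-level undo/redo point (not tied to any particular framework):

using System.Collections.Generic;

// Toy n-level undo/redo holder for a single business-object value.
// Purely illustrative; not tied to any framework.
public class UndoableValue<T> {
  private readonly Stack<T> _undo = new Stack<T>();
  private readonly Stack<T> _redo = new Stack<T>();

  public UndoableValue(T initial) { Current = initial; }

  public T Current { get; private set; }

  public void Set(T value) {
    _undo.Push(Current);   // remember where we were
    _redo.Clear();         // a new edit invalidates the redo trail
    Current = value;
  }

  public void Undo() {
    if (_undo.Count == 0) return;
    _redo.Push(Current);
    Current = _undo.Pop();
  }

  public void Redo() {
    if (_redo.Count == 0) return;
    _undo.Push(Current);
    Current = _redo.Pop();
  }
}

Stacked up per property (or per object via mementos), that same history doubles as the playback/simulation feature mentioned above.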

The other cool thing is that it could be physically distributed in a tiered scenario if needed. And of course, we MUST support SOA friendliness, such that an entity-centric service could call into it easily without a whole lot of rework. And let's not forget security, federation and governance - very business-centric things. Anyone with practical experience in LOB application/system development knows the value of these things. In a distributed environment, most of the old pros would think you're absolutely insane to do it any other way.

Nonetheless, by maintaining a separate business layer and adding a controller or agent, I reserve the right to evolve the "resources" as I see fit. If I need a transactional one, then I'll build a transactional repository. Or maybe I don't need/have a repository and instead have an ORM model, in which case it'd wire into the BeginTrans, CommitTrans, Rollback semantics, complete with transaction promotion. Whether you use an agent approach or a controller approach, I think that is actually the missing piece. Cool enough, it fits in almost perfectly with workflow-style technologies, too.

Great, so where do we go from here?

Since all these overly academic names like repository, unit of work and such add more confusion than coolness, I like to find more "contemporary" representations that follow the lingua franca of the environment for my actual interfaces. Consider something like this text model:

IRepository

IUnitOfWork

IBusinessActivityScope : IUnitOfWork

ITransactionalBusinessActivityScope : IBusinessActivityScope

IResourceAgent

BasicBusinessActivityScope : IBusinessActivityScope

TransactionalBusinessActivityScope : BasicBusinessActivityScope,
                                     IBusinessActivityScope,
                                     ITransactionalBusinessActivityScope

DomainResourceAgent : IResourceAgent

ResourceManager
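
Sketched in C#, those might look something like this (the member names are my placeholders, just enough to make the shapes concrete):

using System;

public interface IRepository<T> where T : class {
  void Add(T entity);
  void Remove(T entity);
}

public interface IUnitOfWork : IDisposable {
  void Save();       // optional save point
  void Complete();   // signal that the unit of work is done
}

public interface IBusinessActivityScope : IUnitOfWork {
  IRepository<T> GetRepository<T>() where T : class;
}

public interface ITransactionalBusinessActivityScope : IBusinessActivityScope {
  void Rollback();
}

public interface IResourceAgent {
  IRepository<T> ResolveRepository<T>() where T : class;
}

The concrete BasicBusinessActivityScope, TransactionalBusinessActivityScope, DomainResourceAgent and ResourceManager from the list above would then implement these, with the transactional flavor free to wire into System.Transactions.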

Done this way, you can pick and choose which of those interfaces you actually need to commit to implementing. Then as your system evolves, you already have a bit of a strategy/vision on which way to go. "Model" before you commit to anything. Then determine interface candidates (read: candidate) from your modeling effort and commit to them. You can always add on later but never take out, so get it right.

As a use case, I would like to use something like an IBusinessActivityScope as my controller for the underlying repository. Internally, the controller would boil down to a few types: a manager, optional agent(s), or the concrete repository implementations themselves. This removes the unit-of-work concerns from your repository, and suddenly it very clearly becomes "just a view." Then your domain objects don't even have to know directly about the repositories, allowing you to opt out of a repository in favor of a classic DAO or the ORM directly.

// through indirect injection, its internal
// repository is discovered 
using(var scope = new BusinessActivityScope()) { 
  // alternatively, explicitly inject a repository
  // directive via a one time call
  //scope.Initialize(settings); 
  
  //not the best way, but for the sake of example
  var orderRepo = scope.GetRepository<Order>(); 
  
  Order o = orderRepo.GetById("90210"); 
  o.Quantity += 1; 
  
  //not necessary, but ok to mark a save point
  //scope.Save(); 
  
  // signal to the controller that you are done
  scope.Complete(); 
  
  // Dispose called which internally calls 
  // Save and Complete
}
Figure 1

So with BusinessActivityScope acting as a controller, it handles the workload, resolution, and so forth, letting the repository be simply a view (with implied model constraints and/or optimizations). At that point, it wouldn't be too much of a stretch to generate a base domain model and appropriate repositories from a tool. All the work and coordination is in the controller, which knows about your base interfaces.

In sum, instead of coercing a design into a DAO or a repository, or rebutting one against the other, the problem is solved with, yes, yet another pattern! If I see a model and a view so clearly, where is the controller?! The answer: build it!

Thursday, April 9, 2009

Why I Think IQ is Dumb

Actually, the title is sarcasm. You will discover a lot of that from me. What I really mean is that IQ isn't everything. Argue not; it really isn't. I recently stumbled across a few vids talking about EQ and the workplace. Being a software developer, I find this quite intriguing indeed. Anyone with experience in IT will tell you how much weight is put on intelligence. But what is intelligence, exactly? Well, it is a lot more than how quickly and efficiently your brain processes things. Nowadays, EQ is the new "intelligence" being sought after. There are way too many geeks out there who think it's "cool" to be arrogant because they are smart. Hey, I am smart too, pal! So it's about time software development got healthy and balanced and let EQ be a full citizen of the industry. Geeks can be smart AND sensitive, can't they? If not, then they should be!