Data, Context and Interaction: A New Architectural Approach, by James O. Coplien and Trygve Reenskaug
https://dl.acm.org/doi/10.1145/2384716.2384782
https://www.artima.com/articles/dci_vision.html
https://www.infoq.com/news/2009/05/dci-coplien-reenskau/
https://www.jianshu.com/p/18d1d582f5c2
James O. Coplien and Trygve Reenskaug have recently published the first article of a series that introduces a new architectural approach to object-oriented programming based on the Data, Context and Interaction (DCI) pattern.
In this first article, the authors argue that, though object oriented programming is instrumental for capturing structure, it doesn’t allow fully expressing user mental models because it fails to represent “end user behavioral requirements”. To illustrate what they actually mean by “behavior”, they take an example of a Savings Account object that can, for instance, decrease its balance and do a withdrawal. According to Coplien and Reenskaug, “these two behaviors are radically different”: “decreasing the balance is merely a characteristic of the data: what it is. To do a withdrawal reflects the purpose of the data: what it does”. The fact of being able to reduce balance characterizes data in any situation – it is stable. Withdrawal, on the contrary, involves “interactions with an ATM screen or an audit trail” – it is dynamic, it is no longer about “being” but rather about “doing”.
While the user model naturally combines the being and the doing parts, “there is little in object orientation, and really nothing in MVC, that helps the developer capture doing in the code.” “Object-orientation lumped [these two actions] into the same bucket” making it difficult to separate “simple, stable data models from dynamic behavioral models”, which is though essential from architecture and maintenance perspective. Moreover, pure object orientation requires splitting up large algorithms and distributing their parts – methods - to objects that are most tightly linked with a given method. However, while some algorithm can live within a single object, “interesting business functionality often cuts across objects.”
To represent these dynamic behavioral models, James and Trygve advocate for using the DCI model that is based on three concepts:
The data, that is expressed with domain objects representing the stable parts;
The interactions, expressed in terms of roles that are “collections of behaviors that are about what objects do”;
The context, that can be viewed as “a table that maps a role member function (a row of the table) onto an object method (the table columns are objects). The table is filled in based on programmer-supplied business intelligence in the Context object that knows, for a given Use Case, what objects should play what roles.”
To provide readers with a concrete illustration, the authors use an example of Money transfer Use Case. Even though the transfer would involve the savings account and the investment account, within this precise Use Case, the user will rather reason in terms of “source account” and “destination account”. These are roles and the interactions of Money transfer can be described through their algorithms. These roles can then be played by different objects depending on context: in this precise example a source account role will link to the savings account object.
A general design concept that would allow representing roles in code would be a trait but its implementation would depend on constructs that exist in a given programming language: traits in Scala, Squeak Traits in Squeak Smalltalk, templates in C++, etc… The greatest advantage of this approach is that the example of code provided by the authors is “almost a literal expansion from the Use Case”:
That makes it more understandable than if the logic is spread over many class boundaries that are arbitrary with respect to the natural organization of the logic—as found in the end user mental model.
This article triggered a great number of reactions and criticisms that allowed James and Trygve to provide some clarifications about the DCI concept.
Michael Feathers and many other commentators argue that assigning the responsibility for transfer to the source account is arbitrary and doesn’t really fit users’ mental model, where transfer is not done by either account but rather by a bank or “transaction objects which map to the user’s conception of an interaction”. John Zabroski, for instance, suggests using the analysis class TransferSlip. Some others argue that DCI relates to things that people already know: “traits” in some languages, “the general idea [of functional programming] that algorithms matter and should be able to be clearly expressed”, etc.
James O. Coplien responds that DCI “tries to reproduce the convenience of algorithmic expression that procedural languages [e.g. Fortran] used to give us combined with many of the good domain modeling notions from 1980s object orientation.” Traits in languages like Scala are a “way of rendering the solution” but different constructs can be used in other languages in order to yield DCI architecture. What counts indeed is not the tool suggested or the example used but the architectural approach of separation between: 1) behavior that is specific to the domain object whatever the situation is, and 2) behavior that is context-specific, that belongs to business logic and often cuts across objects. As Bill Venners puts it, “if the account concept is involved in 10 use cases of your application, you may end up placing some behavior for each of those use cases into class Account” and this is a big challenge for the designer. So letting “an object have a different class in each context” by applying DCI is “an attempt to improve the understandability of OO programs”:
[…] this article points out that sometimes you can end up wanting to put too much [behavior on] objects, and that different subsets of all that behavior may be needed in different contexts. [The authors suggest that] you model that extra stuff in traits, and that the traits would map to roles in the user’s mental model. And then in a particular context, or use case, you add on the traits that you need for that context to the dumb domain objects.
To underscore the readability yielded by DCI, Coplien points out four reasons why it renders code easier to read and to debug:
1. The context switches across business functions are fewer and more closely follow the mental model (role-based) than the programmer model (domain-based);
2. Inclusion polymorphism is almost completely gone: call Foo and you get Foo, not one of the many Foos derived somewhere in a subtyping hierarchy;
3. Testing points with business value can be identified, which means true BDD (Behavior-Driven Development) is possible, making it easier to develop test cases that support debugging;
4. There is less need for run-time debugging because the code is more readable at compile time.
Trygve Reenskaug stresses that to understand DCI, one needs to “lift one’s eyes from the class abstraction and open one’s mind to an additional abstraction that applies to such object structures” and “to add an object abstraction that augments the class and that retains object identity”: a role.
Before the 1960s, computers had only just come into practical use. Software was usually designed and written for one specific application on a designated machine, in machine code or an assembly language closely tied to the hardware. Programs were small, documentation was usually nonexistent, systematic development methods were rarely used, and designing software amounted to writing programs: an essentially self-sufficient, artisanal mode of production.
By the mid-1960s, large-capacity, high-speed computers had rapidly widened the range of computer applications, and software development grew sharply. High-level languages became popular (FORTRAN 66), operating systems began to develop (IBSYS), and the first generation of database management systems appeared (IMS). Software systems grew ever larger and more complex, and reliability problems became ever more prominent. The old self-sufficient, artisanal mode of software production could no longer meet the need and urgently had to change, and so the software crisis broke out: backward production methods could not satisfy the rapidly growing demand for software, causing a series of serious problems in development and maintenance:
Software development costs and schedules ran out of control
Software reliability was poor
The software produced was hard to maintain
In 1968, NATO computer scientists held an international conference in West Germany where the software crisis was discussed for the first time and the term "software engineering" was formally coined; a new engineering discipline was born.
Structured Programming
Structured programming, proposed by E. W. Dijkstra in 1969, centers on modular design: the software system to be developed is divided into a number of mutually independent modules, so that completing each module becomes simple and well-defined, laying a good foundation for designing larger software.
Because the modules are mutually independent, designing one module does not drag the others in, and a relatively complex problem can thus be reduced to the design of a series of simple modules. Module independence also makes it much easier to extend an existing system or build a new one, since existing modules can be reused like building blocks.
In the structured view, any algorithmic function can be realized by combining program modules built from three basic control structures: sequence, selection, and iteration.
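As a minimal illustration of this claim (a hypothetical example, not from the original text), the three structures alone suffice to express an algorithm such as summing the even numbers in an array:

#include <cstdio>

int main() {
    int data[] = {1, 2, 3, 4, 5, 6};
    int sum = 0;                 // sequence: statements run one after another
    for (int x : data) {         // iteration: the loop structure
        if (x % 2 == 0) {        // selection: the branch structure
            sum += x;
        }
    }
    std::printf("sum of evens = %d\n", sum);
    return 0;
}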
Structured programming shows itself mainly in the following three aspects:
Top-down, stepwise refinement. Writing a program is seen as a gradually evolving process: the analysis of the problem is divided into levels, each new level a refinement of the one above it.
Modularity. The system is decomposed into modules, each implementing a specific function; the final system is assembled from these modules, which pass information to one another through interfaces.
Structured statements. Within each module, only statements of the three control flows are allowed: sequence, branching, and looping.
The concepts and methods of structured programming, together with the whole set of software tools supporting them, constituted the structured revolution. It was the most influential software concept since the advent of the computer and has been called the third milestone in the development of software, with an influence even more far-reaching than the first two (the subroutine and the high-level language).
In 1972, D. M. Ritchie of Bell Labs designed a new language on the basis of B, taking the second letter of BCPL as its name: the C language. By early 1973 the main body of C was complete, and it gradually became the most popular structured programming language.
Professor Niklaus Wirth gave the programming world a famous formula: program = data structures + algorithms.
Structured programming handles problems in the computer's own way of thinking, separating data structures from algorithms. The data structures describe how the data to be processed is organized, while the algorithms describe the concrete steps of the operations. We implement the algorithms step by step as functions and simply call them one after another.
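A minimal sketch of this separation (hypothetical names, not from the original text): the data structure is defined on its own, and the algorithms are free functions called in sequence:

#include <cstdio>

// Data structure: describes how the data to be processed is organized.
struct Account {
    double balance;
};

// Algorithms: free functions that describe the operating steps.
void deposit(Account& a, double amount)  { a.balance += amount; }
void withdraw(Account& a, double amount) { a.balance -= amount; }

int main() {
    Account a{100.0};
    deposit(a, 50.0);    // call the functions one after another
    withdraw(a, 30.0);
    std::printf("balance = %.2f\n", a.balance);
    return 0;
}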
Note: the term "procedure-oriented" was coined only after "object-oriented" appeared, as its counterpart; it can be regarded as another name for "structured".
Object-Oriented Programming
Facing increasingly complex software systems, structured programming gradually exposed weaknesses in the following areas:
The perspective on the problem domain. In the real world, the protagonists of a problem domain are its subjects, meaning both objectively existing entities and subjectively abstracted concepts; they are the main targets when people observe and solve problems. A school's student management system, for instance, simple or complex, always revolves around two subjects: students and teachers. Structured design does not treat the subject as a whole; it extracts the behaviors attached to the subject and constructs the application around functions. Consequently the programmer must map the real world of subjects onto a solution space composed of functional modules, a transformation that not only increases the complexity of the design but also departs from the way people naturally observe and solve problems. Moreover, on closer reflection, in any problem domain the subjects are stable while the behaviors are not: a national library, a school library, and an international library all contain the subject "book", yet the ways books are managed may differ completely. Structured design fixes its viewpoint on the unstable operations and separates a subject's attributes from its behavior, making later maintenance and extension difficult; even a tiny change can ripple through the whole system. With problems growing larger, environments more complex, and requirements changing faster, it became urgent to align the computer's way of solving problems with people's habitual way and to end the distortion between software design methods and how humans normally solve problems. This was the first reason object orientation was proposed.
The level of abstraction. Abstraction is humanity's basic instrument for solving problems; a good abstraction strategy controls complexity and improves a system's generality and extensibility. Abstraction comes mainly in two forms, procedural abstraction and data abstraction, and structured design uses the former: operations with well-defined functionality are extracted from the problem domain and treated as entities. For structuring a software system this level of abstraction is somewhat arbitrary, and it is unstable, making it hard to design every operating step of the system precisely; once the representation of some subject attribute changes, many parts of the existing system may be dragged in. Data abstraction is a higher level of abstraction than procedural abstraction: it binds a subject's attributes and behaviors together into one unified abstraction, truly simulating the subjects of the real world.
Encapsulation. Encapsulation means binding the attributes and behaviors of some real-world subject together and placing them in one logical unit. Structured design does not encapsulate subjects as wholes; it encapsulates only the individual functional modules, each of which may freely operate on unprotected subject attributes, and since the data describing the attributes is split off from the behavior, a change in how an attribute is represented can affect the whole system.
Reusability. Reusability marks a software product's capacity for reuse and is an important measure of its success. The basic unit of structured programming is the module, each of which is merely a procedural description implementing a specific function, so the unit of reuse can only be the module. For today's software development such a granularity of reuse is insignificant, and when the types of some of the data involved change, those functions can no longer be used at all. A demand thus arose for reusable components of much larger granularity.
The weaknesses above drove people to seek a new programming method that could meet society's higher demands on software development, and object orientation was born. Object-oriented technology emphasizes facing, during development, the things of the objective world or problem domain, and describing them directly and naturally with the thinking methods people commonly use in understanding the objective world. Its basic characteristics are abstraction, encapsulation, inheritance, and polymorphism.
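A minimal C++ sketch of these characteristics (a hypothetical example): state is encapsulated together with behavior, concrete classes inherit from an abstraction, and one message dispatches polymorphically:

#include <cstdio>

// Encapsulation: data and behavior bound together; state is hidden.
class Shape {
public:
    virtual ~Shape() = default;
    virtual double area() const = 0;  // polymorphism: one message, many forms
};

// Inheritance: concrete shapes reuse the abstract concept.
class Circle : public Shape {
public:
    explicit Circle(double r) : r_(r) {}
    double area() const override { return 3.14159265 * r_ * r_; }
private:
    double r_;  // encapsulated attribute
};

class Rectangle : public Shape {
public:
    Rectangle(double w, double h) : w_(w), h_(h) {}
    double area() const override { return w_ * h_; }
private:
    double w_, h_;
};

int main() {
    Circle c(1.0);
    Rectangle r(2.0, 3.0);
    const Shape* shapes[] = {&c, &r};
    for (const Shape* s : shapes)
        std::printf("area = %.2f\n", s->area());  // dynamic dispatch
    return 0;
}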
In the 1980s, object-oriented programming began to sweep the industry and gradually became mainstream. C++ (1983) happened to be born in that period, and it naturally chose to support object-oriented programming.
Object orientation is a way of thinking: when analyzing and solving a problem, we turn our attention to the subjects in the real world, clarify the relationships between those subjects with tools such as UML, and finally implement the subjects and their relationships in an object-oriented language. It consists of three broad steps, object-oriented analysis (OOA), object-oriented design (OOD), and object-oriented programming (OOP):
First, analyze the requirements. Do not think yet about how to implement them with a program; first work out what the stable subjects in the requirements are and how they relate to one another;
Next, elaborate the model from the first step into an implementable, cost-effective, modular, low-coupling and high-cohesion model;
Finally, implement the model in an object-oriented way.
Once we are used to procedural (structured) programming, we may find nowhere in our programs that seems to need object orientation; the chief reason is that our thinking has not shifted. On receiving a requirement, a programmer's first reaction is usually how to implement it, a typically procedural mindset, and one that may well deliver quickly. Object orientation faces the subjects instead: the first step is not to consider implementation but to analyze the requirements, finding the subjects in them and the relationships between those subjects. The key to the shift from procedural to object-oriented thinking therefore lies in that first design step: on receiving a requirement, resist considering how to implement it; model it with UML first, then implement according to the UML model. This change of mindset may take a while.
Design Patterns
Designing object-oriented software is hard, and designing reusable object-oriented software is even harder. You must find the pertinent objects, factor them into classes at the right granularity, define class interfaces and inheritance hierarchies, and establish key relationships among the objects. Experienced object-oriented designers can indeed produce good designs, while novices, facing so many choices, do not know where to begin and keep falling back on the non-object-oriented techniques they used before. It takes novices a long time to grasp what good object-oriented design is. Experienced designers evidently know something the novices do not. What is it?
Expert designers know not to solve every problem from first principles. Rather, they reuse solutions that have worked for them in the past; when they find a good solution, they use it again and again. Such experience is part of what makes them experts.
The GoF brought the concept of patterns into software engineering, marking the birth of software patterns. Software patterns are by no means limited to design patterns; they also include architectural patterns, analysis patterns, process patterns, and more. In fact, recognized patterns exist at every stage of the software development lifecycle. Software patterns are independent of the application domain: whether you do mobile, desktop, web, or embedded development, you can use them.
Among software patterns, design patterns are the most deeply studied branch. They distill the design experience of many experts and have been applied in thousands of software systems. In 1995, the GoF compiled the 23 design patterns they had collected and organized into a book called Design Patterns, whose publication marked the arrival of the design-pattern era. These patterns solve specific design problems, making object-oriented designs more flexible and elegant, and ultimately more reusable. They help designers build new designs on the foundation of earlier work, reusing previously successful solutions; a designer familiar with such patterns need not rediscover them and can apply them to a design problem at once.
Design patterns make it easier to reuse successful designs and architectures, and expressing proven techniques as design patterns makes them more accessible to developers of new systems. Design patterns help you choose design alternatives that keep a system reusable and avoid alternatives that compromise reusability. In short, design patterns help a designer get a design right faster.
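As one small, well-known instance (a sketch of the GoF Strategy pattern with hypothetical names), a family of algorithms is encapsulated behind one interface so that client code can reuse any of them unchanged:

#include <algorithm>
#include <vector>

// Strategy (GoF): encapsulate interchangeable algorithms behind one interface.
struct SortStrategy {
    virtual ~SortStrategy() = default;
    virtual void sort(std::vector<int>& v) const = 0;
};

struct Ascending : SortStrategy {
    void sort(std::vector<int>& v) const override {
        std::sort(v.begin(), v.end());
    }
};

struct Descending : SortStrategy {
    void sort(std::vector<int>& v) const override {
        std::sort(v.rbegin(), v.rend());
    }
};

// The client is written once against the abstraction and reused unchanged.
void render(std::vector<int> data, const SortStrategy& strategy) {
    strategy.sort(data);
    // ... display the sorted data ...
}

int main() {
    std::vector<int> scores = {3, 1, 2};
    render(scores, Ascending{});
    render(scores, Descending{});
    return 0;
}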
Shu-Ha-Ri is a progressive way of learning that comes from the martial arts:
Step one, Shu: follow the rules until you understand them thoroughly and they become habitual.
Step two, Ha: reflect on the rules, look for their exceptions, and "break" them.
Step three, Ri: having mastered the rules, move beyond them, grasping their essence and deeper energy.
Learning design patterns is likewise a Shu-Ha-Ri process:
Step one, Shu: imitate existing design patterns in your designs and applications, and learn to think while imitating.
Step two, Ha: once you use the basic design patterns fluently, create new ones.
Step three, Ri: forget all the design patterns and apply them imperceptibly in design and practice.
Of course, even if you never study design patterns you may use some of them unconsciously, but using them unconsciously after having studied them is surely two levels beyond that.
Design Principles
We live in a world full of rules: beneath the complex, ever-changing surface, all things are governed by eternal truths and run in an orderly way. Design patterns are no different. Behind every design pattern lie certain "eternal truths", and those truths are the design principles. Indeed, what could matter more than principles? Like a person's worldview and outlook on life, they are what ultimately governs all behavior. Why one pattern solves a problem this way while another solves it that way comes down, in the end, to the design principles both follow. One may say the design principles are the soul of design patterns.
For the design of object-oriented software systems, raising reusability while supporting maintainability is crucial, and how to improve both at once is one of the core questions object-oriented design must answer. In object-oriented design, maintainable reuse is grounded in the design principles; each principle embodies certain object-oriented design ideas and can raise the quality of a software structure from its own angle.
Object-oriented design principles distill object-oriented thought. They are more actionable than the core elements of object orientation (encapsulation, inheritance, and polymorphism), yet more abstract than design patterns. Figuratively speaking, object-oriented thought is like the spirit of the law, the design principles are like the constitution, and the design patterns are like the various concrete statutes. The design principles are also among the important criteria for judging how well a design pattern is applied; in studying design patterns we constantly meet statements such as "pattern X conforms to principle Y" or "pattern X violates principle Z".
Design principles such as SOLID and the Law of Demeter are household names, yet most people's understanding of them is not deep. Beginners are advised to read closely Robert C. Martin's 2002 classic, Agile Software Development: Principles, Patterns, and Practices.
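As a tiny illustration (a hypothetical example, not taken from Martin's book), the Open-Closed Principle asks that a module be open for extension but closed for modification: clients depend on an abstraction, and new behavior arrives as new classes rather than edits to old ones:

#include <memory>
#include <string>
#include <vector>

// Closed for modification: this interface and its clients never change.
struct Notifier {
    virtual ~Notifier() = default;
    virtual void send(const std::string& message) const = 0;
};

void broadcast(const std::vector<std::unique_ptr<Notifier>>& channels,
               const std::string& message) {
    for (const auto& channel : channels)
        channel->send(message);   // depends only on the abstraction
}

// Open for extension: new channels are added without touching broadcast().
struct EmailNotifier : Notifier {
    void send(const std::string& message) const override { /* ... SMTP ... */ }
};
struct SmsNotifier : Notifier {
    void send(const std::string& message) const override { /* ... SMS gateway ... */ }
};

int main() {
    std::vector<std::unique_ptr<Notifier>> channels;
    channels.push_back(std::make_unique<EmailNotifier>());
    channels.push_back(std::make_unique<SmsNotifier>());
    broadcast(channels, "deployment finished");
    return 0;
}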
Domain-Driven Design
For a long time we have developed software in the traditional way, as shown in the figure below:
[Figure: tranditional-method.png]
Separating the analysis model from the design model means the business model in the analyst's head and the business model in the designer's head do not coincide and usually have to be mapped onto each other. As refactoring and bug fixing proceed, the design model keeps evolving and drifts further and further from the analysis model. At times the analyst, looking at the analysis model, finds a requirement easy to implement, while the designer, looking at the design model, finds it hard, and each struggles to understand the other's model. Over time a fatal rift opens between the analysis model and the design model, and knowledge gained in any activity on one side cannot be passed to the other.
In 2004 Eric Evans published the founding work of Domain-Driven Design (DDD), "Domain-Driven Design: Tackling Complexity in the Heart of Software". It abandons the separation of analysis model and design model and seeks a single model that satisfies both demands: the domain model. The real complexity of many systems lies not in the technology but in the domain itself, in the business users and the business activities they perform. If at design time we fail to gain a deep understanding of the domain and to express its complex logic clearly through model concepts and model elements, then no matter how advanced or fashionable our platforms and infrastructure are, the project can hardly truly succeed.
Domain-driven design proceeds in two phases:
Using a ubiquitous language that domain experts, designers, and developers can all understand as the medium of communication, discover the domain concepts during that communication and shape them into a domain model;
Let the domain model drive the software design, and express the model in code.
Clearly, the core of domain-driven design is building the right domain model.
Domain experts, designers, and developers together create a ubiquitous language suited to modeling the domain, and the whole team must agree on it. All members communicate in the ubiquitous language, everyone understands what the others are saying, and the language is a direct reflection of the software model. When domain experts, designers, and developers work together like this, the software they produce expresses the business rules accurately. The domain model, grounded in the ubiquitous language, is a software model of a particular business domain, as shown below:
[Figure: domain-model.png]
A typical architectural solution for domain-driven design contains four conceptual layers, the classic four-layer model, as shown below:
[Figure: ddd-layer.png]
User Interface is the user interface/presentation layer, responsible for presenting information to the user and interpreting user commands.
Application is the application layer, a very thin layer that defines all the tasks the software must accomplish. Outwardly it offers the presentation layer its application capabilities (queries and commands); inwardly it calls the domain layer (domain objects or domain services) to carry out the business logic. The application layer contains no business logic of its own.
Domain is the domain layer, responsible for expressing business concepts, business state, and business rules. The domain model lives in this layer; it is the core of business software.
Infrastructure is the infrastructure layer, which supplies generic technical capabilities to the other layers: communication between layers, persistence for the domain layer, and so on. In short, the infrastructure layer supports the technical needs of the other layers through architecture and frameworks.
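A minimal C++ sketch of how these layers collaborate (all names hypothetical): the application service merely orchestrates, the business rule lives in the domain object, and the repository interface belongs to the domain while its implementation would live in infrastructure:

#include <stdexcept>
#include <string>

// Domain layer: a business concept whose rules live with its data.
class Account {
public:
    explicit Account(double balance) : balance_(balance) {}
    void debit(double amount) {               // the business rule lives here
        if (amount > balance_)
            throw std::runtime_error("insufficient funds");
        balance_ -= amount;
    }
private:
    double balance_;
};

// Domain layer: repository abstraction; its implementation belongs to
// the infrastructure layer (database, files, ...).
struct AccountRepository {
    virtual ~AccountRepository() = default;
    virtual Account& find(const std::string& id) = 0;
    virtual void save(const Account& account) = 0;
};

// Application layer: a thin orchestration with no business logic of its own.
class PaymentService {
public:
    explicit PaymentService(AccountRepository& repo) : repo_(repo) {}
    void pay(const std::string& accountId, double amount) {
        Account& account = repo_.find(accountId);
        account.debit(amount);                // delegate the rule to the domain
        repo_.save(account);
    }
private:
    AccountRepository& repo_;
};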
The DCI Architecture Pattern
In 2009 James O. Coplien and Trygve Reenskaug published the paper "The DCI Architecture: A New Vision of Object-Oriented Programming", marking the birth of the DCI architecture pattern. Interestingly, Trygve Reenskaug is also the creator of the MVC architecture pattern; the man did just two things in his life, creating MVC when young and DCI when old, and spent the rest of his time thinking, leaving the rest of us trailing far behind.
Object-oriented programming was meant to unify the programmer's and the user's perspectives within the code, a boon both to usability and to ease of understanding programs. Yet while objects reflect structure well, they fail at reflecting the system's actions; DCI's vision is to reflect the roles in the end user's cognitive model and the interactions between those roles.
Traditionally, object-oriented programming languages offer no way to capture collaborations between objects; they cannot reflect the algorithms that flow across those collaborations. Just as object instances reflect the domain structure, the collaborations and interactions between objects have structure too. They are part of the end user's mental model, but you cannot find a cohesive representation of them in the code. In essence, a role embodies a generalized, abstract algorithm. A role has no flesh and blood and cannot actually do anything; in the end the work still falls to objects, which themselves also bear the duty of representing the domain model.
People hold two different models of the single whole called an "object": what the system is and what the system does. This is the root problem DCI sets out to solve. Users recognize individual objects and the domain they represent, but each object must also implement behaviors from the user's model of the interactions, joining other objects through the role it plays in the use case. Because end users can merge these two perspectives into one, an object of a class should support not only the member functions of its class but also the member functions of the roles it plays, as though those functions were its own. In other words, we want to inject the roles' logic into the objects, making that logic as much a part of the object as the methods the object obtained from its class at instantiation. We can arrange at compile time for an object to carry all the logic its possible roles may need; if we are a little smarter, we can instead learn the assigned role at run time and inject exactly the logic that is actually needed.
The algorithm and the role-to-object mapping are owned by the Context. The Context "knows" which object should serve as the actual actor in the current use case, and is responsible for "casting" that object into the appropriate role in the scenario. (In theater, "cast" means selecting actors for parts, which at least matches the usage here; the other intent is to evoke the meaning of "cast" in some programming language type systems.) In a typical implementation there is one Context object per use case, and each role involved in the use case has an identifier in the corresponding Context. All the Context has to do is bind the role identifiers to the right objects. Then we simply trigger the "opening" role in the Context, and the code runs on from there.
Thus we have the complete DCI architecture (the Data, Context, and Interaction layers):
The Data layer describes which domain concepts the system has and the relationships between them. It focuses on establishing the domain objects and their relations, letting programmers think about the system from the object perspective, so that "what the system is" becomes easier to understand.
The Context layer is as thin as possible. A Context is often implemented stateless: it merely finds the appropriate roles and lets the roles interact to carry out the business logic. Simple does not mean unimportant, though; making the context layer explicit is precisely what gives people an entry point and a main thread for understanding the business flow of the software.
The Interaction layer is mainly embodied in the modeling of roles. Roles are the real executors of the complex business logic in each context and express "what the system does". A role models behavior; it connects the context with the domain objects. Because system behavior is complex and changeable, roles let the system separate the stable domain-model layer from the volatile behavior layer, with roles concentrating on modeling system behavior. This layer is usually concerned with extensibility and sits closer to software engineering practice; in object-oriented terms it is designed mostly from the class perspective.
DCI is now widely seen as a development of and complement to DDD, used for domain modeling on an object-oriented basis. Modeling roles explicitly resolves the rich-versus-anemic model dispute in object-oriented modeling. By modeling behavior explicitly with roles, and letting a role be bound (cast) to the corresponding domain object within a context, DCI solves both the mismatch between data boundaries and behavior boundaries and the problem of keeping data and behavior in domain objects highly cohesive yet loosely coupled.
A thorny problem in object-oriented modeling is that data boundaries and behavior boundaries often do not coincide. Following the idea of modularity, we use classes to encapsulate behavior together with the data it is tightly coupled to. But in complex business scenarios, behavior often spans several domain objects; placing such behavior in one object inevitably forces other objects to expose their internal state to it. Hence, as object orientation developed, domain modeling split into two camps. One camp prefers to model behavior that spans several domain objects in domain services; overused, this reduces domain objects to dumb bags of getters, a result known as the anemic model. The other camp firmly holds that methods belong to domain objects, so all business behavior stays in the domain objects; as more business scenarios are supported, the domain objects become god classes, the abstraction levels of the methods inside a class are hard to keep consistent, and because behavior boundaries are hard to draw well, the data-access relations between objects grow complicated too. That result is known as the rich (bloated) model.
DCI converges with master Yuan Yingjie's idea of "small classes, big objects": classes should be small and objects should be large. God classes are bad, but god objects are exactly what we hope for. From class to object is a many-to-one relationship: in the end one object is composed of many single-responsibility small classes, each of which can have its own data and behavior. The mapping from classes to an object is done with Mixins in Ruby, Traits in Scala, and multiple inheritance in C++.
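A minimal C++ sketch of "small classes, big objects" (hypothetical names): each role is a small single-responsibility class, and one object composes them through multiple inheritance:

#include <cstdio>
#include <string>
#include <utility>

// Small single-responsibility classes, each modeling one role.
class Parent {
public:
    void tellStory() const { std::puts("Once upon a time..."); }
};

class Child {
public:
    void visitParents() const { std::puts("Visiting mom and dad."); }
};

class Subordinate {
public:
    void followArrangement() const { std::puts("On it, boss."); }
};

// The big object: one Person aggregates many roles via multiple inheritance
// (Mixins in Ruby and Traits in Scala play the same part).
class Person : public Parent, public Child, public Subordinate {
public:
    explicit Person(std::string name) : name_(std::move(name)) {}
private:
    std::string name_;
};

int main() {
    Person p("Alice");
    p.tellStory();           // in front of the children: a parent
    p.visitParents();        // in front of the parents: a child
    p.followArrangement();   // in front of the boss: a subordinate
    return 0;
}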
Take an example from daily life:
A person has multiple roles, and different roles carry different responsibilities:
As parents: we tell our children stories, play games with them, and tuck them in at night;
As children: we honor our parents and listen to their advice on life;
As subordinates: before the boss, we follow the work arrangements;
As superiors: we assign work to subordinates, and coach and motivate them;
…
Here the person (the big object) aggregates multiple roles (small classes); in a given scene, only the specific role is played:
Before our children, we are parents;
Before our parents, we are children;
At work, before the boss, we are subordinates;
Before subordinates, we are superiors;
…
Communication system software has no UI layer, and its application layer is very thin too, so the traditional DDD four-layer model does not fit. After DCI was proposed, we redefined the DDD layered architecture for communication system software, as shown below:
[Figure: ddd-layer-with-dci.png]
Schedule is the scheduling layer. It maintains the UE's state model, covering not only the essential business states but also the implementation states. When the scheduling layer receives a message, it delegates the handling to an Action in the Context layer.
Context is the context layer (the Context of DCI). Taking an Action as its unit, it handles one synchronous or asynchronous message, casts the domain objects of the Domain layer into suitable roles, and lets the roles interact to complete the business logic.
The Domain layer defines the domain model, covering not only the modeling of the domain objects and the relationships between them (the Data of DCI) but also the explicit modeling of the objects' roles (the Interaction of DCI).
The Infrastructure layer supplies generic technical capabilities to the other layers: communication between layers, persistence for the domain layer, and so on. In short, it supports the technical needs of the other layers through architecture and frameworks.
Domain-Specific Languages
DSL (Domain-Specific Language) is, as the name suggests, a language developed for one particular domain. The C, C++, and Java we usually deal with are general-purpose languages that can program for any domain; they have generality to spare but lack focus, and DSLs arose to make up for that weakness.
Software development "godfather" Martin Fowler's 2010 book "Domain-Specific Languages" is the monumental work in the field and set off a wave of DSL programming. DSLs are actually not that mysterious. In everyday object-oriented programming we consciously or unconsciously use some DSL methods and techniques; for example, if we define a set of functions that speak very directly in business terms, the collection of those functions can already be regarded as a DSL. A DSL and business-oriented functions do overlap, but that is only one side of the matter: a DSL looks at the code more from the customer's point of view, while defining functions looks at the code more from the point of view of the solution. The two intersect, yet their starting points are quite different.
In Martin Fowler's view, DSLs come in two basic kinds: internal and external. As the names suggest, an external DSL amounts to implementing a programming language, perhaps not as complex as a general-purpose one, but still a sizable piece of work. An internal DSL defines and wraps keywords on top of a general-purpose host language to achieve the DSL's goals; its extensibility may be constrained by the host language, and it may be less approachable to people unfamiliar with the host, but the benefit is that you can use the host language's full power.
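A minimal sketch of an internal DSL in C++ (a hypothetical example using a fluent method-chaining style on top of the host language):

#include <cstdio>
#include <string>
#include <utility>
#include <vector>

// An internal DSL: the host language is C++, but the fluent call chain
// reads close to the business vocabulary.
class OrderBuilder {
public:
    OrderBuilder& item(std::string name, int quantity) {
        lines_.push_back({std::move(name), quantity});
        return *this;
    }
    OrderBuilder& shipTo(std::string address) {
        address_ = std::move(address);
        return *this;
    }
    void place() const {
        std::printf("order: %zu lines -> %s\n", lines_.size(), address_.c_str());
    }
private:
    struct Line { std::string name; int quantity; };
    std::vector<Line> lines_;
    std::string address_;
};

int main() {
    OrderBuilder()
        .item("keyboard", 2)
        .item("mouse", 1)
        .shipTo("12 Main St")
        .place();
    return 0;
}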
The transaction DSL originated by master Yuan Yingjie is an internal DSL (its host is C++). It is used to reduce the implementation complexity of the business, so that the scheduling layer handles only the essential business states while every non-steady state becomes an atomic transactional process, as shown below:
[Figure: transaction-dsl.png]
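The actual transaction DSL is not shown in this article; the following is only a speculative C++ sketch of what such an internal DSL might look like, composing message-handling steps into one transaction that rolls back on failure (all names hypothetical):

#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch only, not Yuan Yingjie's actual transaction DSL:
// steps are composed into one transaction that rolls back on failure.
struct Step {
    std::function<bool()> run;        // returns false on failure
    std::function<void()> rollback;   // compensating action
};

class Transaction {
public:
    Transaction& then(Step step) {
        steps_.push_back(std::move(step));
        return *this;
    }
    bool execute() {
        for (std::size_t i = 0; i < steps_.size(); ++i) {
            if (!steps_[i].run()) {
                while (i-- > 0)            // undo completed steps in reverse
                    steps_[i].rollback();
                return false;
            }
        }
        return true;
    }
private:
    std::vector<Step> steps_;
};

// Usage (hypothetical): an Attach flow as one atomic transaction.
// Transaction().then(authenticate).then(setupBearer).then(notifyCore).execute();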
With the transaction DSL in place, the DDD four-layer model for communication system software can evolve into a five-layer model, as shown below:
[Figure: ddd-layer-with-dci-dsl.png]
Schedule is the scheduling layer. It maintains the UE's state model, now covering only the essential business states, and dispatches the received messages to the transaction DSL layer.
transaction DSL is the transaction layer. It corresponds to one business flow, such as UE Attach, composing the handling of the individual synchronous or asynchronous messages into a transaction that rolls back when it fails. When the transaction layer receives a message from the scheduling layer, it delegates the handling to an Action in the context layer.
Context is the context layer (the Context of DCI). Taking an Action as its unit, it handles one synchronous or asynchronous message, casts the domain objects of the Domain layer into suitable roles, and lets the roles interact to complete the business logic.
The Domain layer defines the domain model, covering both the modeling of the domain objects and the relationships between them (the Data of DCI) and the explicit modeling of the objects' roles (the Interaction of DCI).
The Infrastructure layer supplies generic technical capabilities to the other layers: communication between layers, persistence for the domain layer, and so on. In short, it supports the technical needs of the other layers through architecture and frameworks.
The Microservice Architecture Pattern
Software "godfather" Martin Fowler popularized the concept of microservices (the term took hold around 2012), and with it two service architecture patterns came to be distinguished, the monolithic pattern and the microservice pattern, as shown below:
[Figure: monoliths-and-microservices.png]
A microservice is a single, small service with genuine business capability. It can choose its own technology stack and database, select its own communication mechanisms, and be deployed on one or several servers. The "micro" here is not about lines of code; it means the scope of a service should be no larger than one Bounded Context (BC) in DDD.
Advantages of the microservice architecture pattern:
A microservice focuses on a single BC, keeping the business simple
Different microservices can be developed by different teams
Microservices are loosely coupled
Each microservice can be developed with a different programming language and toolset
Each microservice can choose the database best suited to its business logic and load
Challenges of the microservice architecture pattern:
The complexity of distributed systems, such as transactional consistency, network latency, fault tolerance, object persistence, message serialization, asynchrony, versioning, and load
More services demand a higher level of DevOps and automation
Changing a service interface ripples through all related services
Duplicate functionality may appear across services
Testing becomes harder
Although the microservice pattern demands a lot of its adopters ("you must be this tall"), it keeps getting hotter as container and cloud technologies mature, and seemingly every system's architecture is embracing microservices. Does that mean the monolithic pattern is no longer an option? In the author's view it depends on the situation; consider the figure below:
[Figure: microservice-premium.png]
The figure above shows intuitively how the monolithic and microservice architectures compare in productivity at different levels of system complexity. For a system that must quickly validate a business model, with few features and a small user base, the monolithic pattern is the better choice, but inside the monolith the functional modules should be clearly partitioned, striving for high cohesion and low coupling.
In short, the microservice architecture has much to recommend it, but before embracing it you must recognize the challenges it brings. Every architecture pattern has strengths and weaknesses; we should choose the one that best fits the actual situation of the project and the team.
Evolution of the Hexagonal Architecture Pattern
Good as the hexagonal architecture pattern is, there is no best, only better, and its evolution has not stopped. In the years since it was proposed, three variants of the hexagonal architecture have appeared in turn; interested readers can follow the links to learn more:
Jeffrey Palermo proposed the Onion Architecture in 2008; the hexagonal architecture is a superset of the onion architecture.
Robert C. Martin proposed the Clean Architecture in 2012, a variant of the hexagonal architecture.
Russ Miles proposed the Life Preserver design in 2013, a design based on the hexagonal architecture.
Summary
This article has traced the evolution of software design in some detail, covering structured programming, object-oriented programming, design patterns, design principles, DDD, DCI, DSLs, and the microservice architecture pattern. A thorough review of these design ideas can help us make better design decisions.
The DCI Architecture: A New Vision of Object-Oriented Programming
by Trygve Reenskaug and James O. Coplien
March 20, 2009
Summary
Object-oriented programming was supposed to unify the perspectives of the programmer and the end user in computer code: a boon both to usability and program comprehension. While objects capture structure well, they fail to capture system action. DCI is a vision to capture the end user cognitive model of roles and interactions between them.
Objects are principally about people and their mental models—not polymorphism, coupling and cohesion
Object oriented programming grew out of Doug Engelbart’s vision of the computer as an extension of the human mind. Alan Kay’s Dynabook vision, often regarded as the progenitor of modern personal laptops, was perhaps the epitome of this vision: a truly personal computer that was almost a companion, an extension of self. He would later create a language, Smalltalk, to carry that vision into the very source code. In fact, the goal of object-oriented programming pioneers was to capture end user mental models in the code. Today we are left with the legacy of these visions in the blossoming of interactive graphical interfaces and the domination of object-oriented languages in programming world-wide.
When a user approaches a GUI, he or she does two things: thinking and doing. For a smooth interaction between man and machine, the computer’s “mental” model (also the programmer’s mental model) and the end user’s mental model must align with each other in a kind of mind-meld. In the end, any work that users do on their side of the interface manipulates the objects in the code. If the program provides accurate real-time feedback about how user manipulations affect program state, it reduces user errors and surprises. A good GUI provides this service. Using an interactive program is like being a doctor trying to navigate a probe through a patient’s bronchial tubes: just as you can’t see the objects in program memory, you can’t see the actual probe in the patient’s body. You need some external representation of the program structure, or of the bronchial probe, to guide your interaction with a program.
We’ve been good at the mind-meld of structure
Both object-oriented design and the Model-View-Controller (MVC) framework grew to support this vision. MVC’s goal was to provide the illusion of a direct connection from the end user brain to the computer “brain”—its memory and processor.
In some interfaces, this correspondence is obvious: if you create a circle on a PowerPoint® slide, the circle in your mind directly maps onto its representation in computer memory. The rows and columns of a spread sheet ledger map onto the screen rows and columns in a spreadsheet program, which in turn map onto the data structures in the program. Words on a text editor page reflect both our model of a written document and the computer’s model of stored text. The object approach to structuring makes such alignment possible, and human thinking quickly aligns with the computer’s notion of structure.
MVC is about people and their mental models—not the Observer pattern
Most programmers think of MVC as a fancy composition of several instances of the Observer pattern. Most programming environments provide MVC base classes that can be extended to synchronize the state of the Model, the View, and the Controller. (Model, View and Controller are actually roles that can be played by the objects that the user provides—we’ll talk more about roles later.) So it’s just a housekeeping technique, right? To think of it that way is to take a nerd’s perspective. We’ll call that perspective “Model-View-Controller.” More deeply, the framework exists to separate the representation of information from user interaction. In that capacity we’ll call it “Model-View-Controller-User,” capturing all four of the important actors at work—MVC-U for short.
It can serve us well to define additional terms more precisely. MVC-U is all about making connections between computer data and stuff in the end user’s head. Data are the representation of information; in computers, we often represent them as bits. But the bits themselves mean nothing by themselves: they mean something only in the mind of the user when there is an interaction between them. The mind of the end user can interpret these data; then they become information. Information is the term we use for interpreted data. Information is a key element of the end user mental model.
This mapping first takes place as an end user approaches an interactive interface, using it to create the path between the data from which the interface is drawn, and his or her model of the business world. A well-designed program does a good job of capturing the information model in the data model, or at least of providing the illusion of doing so. If the software can do that then the user feels that the computer memory is an extension of his or her memory. If not, then a “translation” process must compensate for the mismatch. It’s at best awkward to do this translation in the code (and it shouldn’t be necessary if the coder knows the end user cognitive models). It is painful, awkward, confusing, and error-prone for the end user to perform this mapping in their head in real time. To unify these two models is called the direct manipulation metaphor: the sense that end users are actually manipulating objects in memory that reflect the images in their head.
Figure 1. Direct Manipulation
We want the system to provide a short path from the information to the data that represents it in the program (Figure 1). The job of the model is to “filter” the raw data so the programmer can think about them in terms of simple cognitive models. For example, a telephone system may have underlying objects that represent the basic building blocks of local telephony called half-calls. (Think about it: if you just had “calls,” then where would the “call” object live if you were making a call between two call centers in two different cities? The concept of a “half-call” solves this problem.) However, a telephone operator thinks of a “call” as a thing, which has a duration and may grow or shrink in the number of parties connected to it over its lifetime. The Model supports this illusion. Through the computer interface the end user feels as though they are directly manipulating a real thing in the system called a “Call.” Other Models may present the same data (of a half-call) in another way, to serve a completely different end user perspective. This illusion of direct manipulation lies at the heart of the object perspective of what computers are and how they serve people.
The View displays the Model on the screen. View provides a simple protocol to pass information to and from the Model. The heart of a View object presents the Model data in one particular way that is of interest to the end user. Different views may support the same data, i.e., the same Models, in completely different ways. The classic example is that one View may show data as a bar graph while another shows the same data as a pie chart.
The Controller creates Views and coordinates Views and Models. It usually takes on the role of interpreting input user gestures, which it receives as keystrokes, locater device data, and other events.
Figure 2. Model-View-Controller-User
Together, these three roles define interactions between the objects that play them—all with the goal of sustaining the illusion that the computer memory is an extension of the end user memory: that computer data reflect that end user cognitive model (Figure 2). That summarizes Model-View-Controller-User: it does a good job of supporting the thinking part of computer/human interaction.
… but in spite of capturing structure, OO fails to capture behavior
Unfortunately, object orientation hasn’t fared so well to capture how we reason about doing. There is no obvious “place” for interactions to live, either on the GUI or in the code. There are exceptions to this rule, particularly for simple actions that involve only a single object. For example, a good interface might allow us to use a well-placed paint brush to change the color of a circle on the screen. In the program, the code for re-coloring the circle is itself part of the circle. In these simple cases the end user mental model, the code, and the screen all align. But for a spreadsheet we can’t see the sum over a column. Instead, we need to invoke some set of mystical incantations to bring up a sub-window or other field that recovers an earlier constructed formula. With appropriate screen design and interaction design we can limit the damage for the end user, and some user interfaces are surprisingly good at making these actions visible. Still, it is far too often that such interfaces are shrouded in mystery. Consider the totally opaque ceremony that takes place in a popular word processor between a picture and a paragraph as you strive to insert one into the other.
As if things aren’t bad enough for the end user, they are as bad or even worse for the programmer. Programmers are people, too, and we want them to be able to map from their understanding of user needs to their understanding of the code. Object-oriented programming languages traditionally afford no way to capture collaborations between objects. They don’t capture algorithms that flow over those collaborations. Like the domain structure captured by object instances, these collaborations and interactions also have structure. They form part of the end user mental model, but you can’t find a cohesive representation of them in the code. For example, users have expectations for their interactions with a spell-checker in a word processor and have a preconceived notion of its interactions with the text, with some dictionary, and with the end user. Which object should encapsulate the spell-checking operation in a word processor: The editing buffer? The dictionary? Some global spell-checker object? Some of these options lead to poor cohesion of the object that hosts spell checking while other options increase the coupling between objects.
In this article, we’ll show how to combine roles, algorithms, objects, and associations between them to provide a stronger mapping between the code and the end-user mental model. The result is an architecture based on the object Data, the Collaborations between objects, and the way that Use Case scenarios comprise Interactions between roles: the DCI architecture.
Where did we go wrong?
We can trace much of our failure to capture the end user mental model of doing to a kind of object mythology that flourished during the 1980s and into the first half of the 1990s. Some buzzwords of this mindset included anthropomorphic design, smart objects, and emergent system behavior. We were taught that system behavior should “emerge” from the interaction of dozens, hundreds or thousands of local methods. The word of the day was: think locally, and global behavior would take care of itself. Anyone caught writing a method that looked like a procedure, or caught doing procedural decomposition, was shunned by the OO community as “not getting it.”
In fact, most GUI problems start with the programmer’s inability to capture the end user cognitive model in the code. The MVC framework makes it possible for the user to reason about what the system is: the thinking part of the user cognitive model. But there is little in object orientation, and really nothing in MVC, that helps the developer capture doing in the code. The developer doesn’t have a place where he or she can look to reason about end user behavioral requirements.
Back in the 1960s, we could take the behavioral requirements for a program, and the FORTRAN code that implemented them, and give both of them to an office mate—together with a big red pen—to review whether the code matched the requirements. The overall form of the code reflected the form of the requirements. In 1967, software engineering took away my ability to do this: the algorithm had to be distributed across the objects, because to have a large method that represented an entire algorithm was believed to not be a “pure” object-oriented design. How did we decide to split up the algorithm and distribute its parts to objects? On the basis of coupling and cohesion. Algorithms (methods) had to be collocated with the object that showed the most affinity for the algorithm: optimizing cohesion.
That works fine when an algorithm lives within a single object, as might be true for changing the color of a circle on the screen, or adding a typed character to a word processor’s text buffer. However, interesting business functionality often cuts across objects. The spell-checker in the text editor involves the screen, some menus, the text buffer, and a dictionary. Even for a shapes editor, the problem of calculating overlapping regions belongs to multiple objects. Object-orientation pushed us into a world where we had to split up the algorithm and distribute it across several objects, doing the best piecemeal job that we could.
Back into the Users’ Head
If the goal of object-orientation was to capture end users’ conceptual model of their worlds, it might serve us well to journey back into that space to find out what lurks there. We’ll start with familiar territory: the data model, which most nerds today call objects (but then, to our puzzlement, model and discuss only as classes) and then move on to more dynamic concepts called roles and collaborations. All three of these—the data model, the role model, and the collaboration model—are conceptual concerns independent of programming language. But, of course, one of our goals is that the programming language should be able to express these things. So we’ll also look at programming concepts that express these concepts in code. One of these concepts is called a class (and we’re again on familiar ground), and the second is called a role.
Data: representing the user’s mental model of things in their world
Managing data is arguably the second oldest profession in computer science (we’ll talk about the oldest profession below). The old Data Flow Diagram (DFD) people used to tell us that the data are the stable part of design. This truism carried forward into objects, and object designers were encouraged to look for stable object structures.
A particularly simplistic rule of thumb in early object-oriented design was: nouns (e.g. in the requirements document) are objects, and verbs are methods. This dichotomy naturally fit the two concepts that programming languages could express. Object-oriented programming languages—particularly the “pure” ones—expressed everything in terms of objects or methods on objects. (Of course, most programming languages used classes to do this. The point is that nothing was supposed to exist outside of an object framework.) So if I looked at a Savings Account object, the fact that it was an object led us to capture it as such (or as a class). The fact that it could both decrease its balance and could do a withdrawal were lumped together as methods. Both are behaviors. However, these two behaviors are radically different. Decreasing the balance is merely a characteristic of the data: what it is. To do a withdrawal reflects the purpose of the data: what it does. Being able to handle a withdrawal—which infers transaction semantics, user interactions, recovery, handling error conditions and business rules—far outstrips any notion of a data model. Withdrawal, in fact, is a behavior of the system and entails system state, whereas reducing the balance is what makes an account an account and relates only to the object state. These two properties are extremely different in kind from the important perspectives of system architecture, software engineering, and maintenance rate of change. Object-orientation lumped them into the same bucket.
The problem with this approach is this: If objects are supposed to remain stable, and if all of the code is in objects, then where do I represent the parts that change? A key, longstanding hallmark of a good program is that it separates what is stable from what changes in the interest of good maintenance. If objects reflect the stable part of the code, there must be a mechanism other than objects to express requirements changes in the code, supporting the Agile vision of evolution and maintainability. But objects are stable—and in an object-oriented program, there is no “other mechanism.”
Stuck with these artificial constraints, the object world came up with an artificial solution: using inheritance to express “programming by difference” or “programming by extension.” Inheritance is perhaps best understood as a way to classify objects in a domain model. For example, an exclusive-access file may be a special kind of disk file, or magnetic, optical and mechanical sensors might be different implementations of the more general notion of sensor. (You might object and say that this is subtyping rather than inheritance, but few programming languages distinguish the expression of these two intents). Because inheritance could express variations on a base, it quickly became a mechanism to capture behavioral additions to a “stable” base class. In fact, this approach became heralded as an honorable design technique called the open-closed principle: that a class was closed to modification (i.e., it had to remain stable to capture the stability of the domain model) but open to extension (the addition of new, unanticipated code that supported new user behaviors). This use of inheritance crept out of the world of programming language into the vernacular of design.
Somewhere along the line, statically typed languages got the upper hand, supported by software engineering. One important aspect of static type system analysis was the class: a construct that allowed the compiler to generate efficient code for method lookup and polymorphism. Even Smalltalk, whose initial vision of objects and a dynamic run-time environment was truly visionary, fell victim to the class compromise. The class became the implementation tool for the analysis concept called an object. This switch from dynamics to statics was the beginning of the end for capturing dynamic behavior.
Inheritance also became an increasingly common way to express subtyping, especially in Smalltalk and C++. You could cheat in Smalltalk and invoke a method in any class in an object’s inheritance hierarchy whether or not a default implementation appeared in the base class. It would work, but it exacerbated the discovery problem, because the base class interface wasn’t representative of the object’s total behavior. The statically typed languages created a culture of inheritance graphs as design abstractions in their own right, fully represented by the base class interface. But because programming by extension took place at the bottom of the hierarchy, newly added methods either didn’t appear in the base class—or, worse, needed to be added there (e.g., as pure virtual functions in C++) every time the inheritance hierarchy was extended to incorporate a new method.
The alternative was to take advantage of static typing, and to let clients of a derived class have access to the class declaration of classes that were added for programming-by-extension. That preserved the “integrity” of the base class. However, it also meant that statically typed languages encouraged cross-links between the buried layers of class hierarchies: an insidious form of violating encapsulation. One result was global header file proliferation. The C++ world tried to respond with RTTI and a variety of other techniques to help manage this problem while the community of dynamically typed languages shrugged and noted that this wasn’t a problem for them.
The rhetoric of the object community started turning against inheritance in the mid-1980s, but only out of a gut feel that was fueled by a few horror stories (inheritance hierarchies 25 deep) and the resulting software engineering nightmare of trying to trace business behavior back into the code.
In the end, this whole sordid story suggests that extension by derivation was a less-than-ideal solution. But, in fact, inheritance wasn’t the most infested fly in the ointment. Most such code changes could be traced back to behavioral requirements changes, and most such changes were driven by end users’ desire for new behaviors in the code. Software is, after all, a service and not really a product, and its power lies in its ability to capture tasks and the growth and changes in tasks. This is particularly credible in light of the argument (well-sustained over the years) that the data model is relatively stable over time. The discord between the algorithm structure and domain structure would be the ultimate undoing of classes as units of growth; we’ll get back to that below.
There is another key learning that we’ll carry forward from this perspective: that domain classes should be dumb. Basic domain objects represent our primordial notions of the essence of a domain entity, rather than the whole universe of processes and algorithms that burdens traditional object-oriented development as Use Cases pile up over time. If I asked you what a Savings Account object can do, you’d be wise to say that it can increase and decrease its balance, and report its balance. But if you said that it can handle a deposit, we’re suddenly in the world of transactions and interactions with an ATM screen or an audit trail. Now, we’ve jumped outside the object and we’re talking about coupling with a host of business logic that a simple Savings Account has no business knowing about. Even if we decided that we wanted to give objects business intelligence from the beginning, confident that we could somehow get it right so the interface wouldn’t have to change much over time, such hopes are dashed by the fact that initial Use Cases give you a very small slice of the life-time code of a system. We must separate simple, stable data models from dynamic behavioral models.
Roles: a (not so) new concept of action that also lives in users’ heads
So let’s go back into the user’s head and revisit the usual stereotype that everything up there is an object. Let’s say that we’re going to build an ATM and that one Use Case we want to support is Money Transfer. If we were to ask about your fond remembrances of your last account funds transfer, what would you report? A typical response to such a question takes the form, “Well, I chose a source account and a transfer amount, and then I chose a destination account, and I asked the system to transfer that amount between the accounts.” In general, people are often a little less precise than this, using words like “one account” and “another account” instead of “source account” and “destination account.”
Notice that few people will say “I first picked my savings account, and then an amount, and then picked my investment account…” and so forth. Some respondents may actually say that, but to go to that level artificially constrains the problem. If we look at such scenarios for any pair of classes, they will be the same, modulo the class of the two accounts. The fact is that we all carry, in our heads, a general model of what fund transfer means, independent of the types of the account involved. It is that model—that interaction—that we want to mirror from the user’s mind into the code.
So the first new concept we introduce is roles. Whereas objects capture what objects are, roles capture collections of behaviors that are about what objects do. Actually, it isn’t so much that the concept of roles is new as it is unfamiliar. Role-based modeling goes back at least to the OORAM method, which was published as a book in 1996. Roles are so unfamiliar to us because so much of our object thinking (at least as nerds) comes from our programming languages, and languages have been impoverished in their ability to express roles.
The interactions that weave their way through the roles are also not new to programming: we call them algorithms, and they are probably the only design formalism that predates data as having their own vocabulary and rules of thumb. What’s interesting is that we consciously weave the algorithms through the roles. It is as if we had broken down the algorithm using good old procedural decomposition and broken the lines of decomposition along role boundaries. We do the same thing in old-fashioned object modeling, except that we break the lines of procedural decomposition (methods) along the lines of object boundaries.
Unfortunately, object boundaries already mean something else: they are loci of encapsulated domain knowledge, of the data. There is little that suggests that the stepwise refinement of an algorithm into cognitive chunks should match the demarcations set by the data model. Old-fashioned object orientation forced us to use the same mechanism for both demarcations, and this mechanism was called a class. One or the other of the demarcating mechanisms is likely to win out. If the algorithmic decomposition wins out, we end up with algorithmic fragments landing in one object but needing to talk to another, and coupling metrics suffer. If the data decomposition wins out, we end up slicing out just those parts of the algorithm that are pertinent to the topic of the object to which they are assigned, and we end up with very small incohesive methods. Old-fashioned object orientation explicitly encouraged the creation of such fine-grain methods, for example, a typical Smalltalk method is three statements long.
Roles provide natural boundaries to carry collections of operations that the user logically associates with each other. If we talk about the Money Transfer example and its roles of Source Account and Destination Account, the algorithm might look like this:
Account holder chooses to transfer money from one account to another
System displays valid accounts
User selects Source Account
System displays remaining valid accounts
Account holder selects Destination Account
System requests amount
Account holder inputs amount
Move Transferred Money and Do Accounting
The Use Case Move Transferred Money and Do Accounting might look like this:
System verifies funds are available
System updates the accounts
System updates statement information
The designer’s job is to transform this Use Case into an algorithm that honors design issues such as transactions. The algorithm might look like this:
Source account begins transaction
Source account verifies funds available (notice that this must be done inside the transaction to avoid an intervening withdrawal!)
Source account reduces its own balance
Source account requests that Destination Account increase its balance
Source Account updates its log to note that this was a transfer (and not, for example, simply a withdrawal)
Source account requests that Destination Account update its log
Source account ends transaction
Source account informs Account Holder that the transfer has succeeded
The code for this algorithm might look like this:
template <class ConcreteDerived>
class TransferMoneySourceAccount: public MoneySource
{
private:
    ConcreteDerived* const self() {
        return static_cast<ConcreteDerived*>(this);
    }
    void transferTo(Currency amount) {
        // This code is reviewable and
        // meaningfully testable with stubs!
        beginTransaction();
        if (self()->availableBalance() < amount) {
            endTransaction();
            throw InsufficientFunds();
        } else {
            self()->decreaseBalance(amount);
            recipient()->increaseBalance(amount);
            self()->updateLog("Transfer Out", DateTime(), amount);
            recipient()->updateLog("Transfer In", DateTime(), amount);
        }
        gui->displayScreen(SUCCESS_DEPOSIT_SCREEN);
        endTransaction();
    }
};
It is almost a literal expansion from the Use Case. That makes it more understandable than if the logic is spread over many class boundaries that are arbitrary with respect to the natural organization of the logic—as found in the end user mental model. We call this a methodful role—a concept we explore more thoroughly in the next section.
Whole objects, each with two kinds of know-how
At their heart, roles embody generic, abstract algorithms. They have no flesh and blood and can’t really do anything. At some point it all comes down to objects—the same objects that embody the domain model.
The fundamental problem solved by DCI is that people have two different models in their heads of a single, unified thing called an object. They have the what-the-system-is data model that supports thinking about a bank with its accounts, and the what-the-system-does algorithm model for transferring funds between accounts. Users recognize individual objects and their domain existence, but each object must also implement behaviors that come from the user’s model of the interactions that tie it together with other objects through the roles it plays in a given Use Case. End users have a good intuition about how these two views fit together. For example, end users know that their Savings Accounts take on certain responsibilities in the role of a Source Account in a Money Transfer Use Case. That, too—the mapping between the role view and data view—is also part of the user cognitive model. We call it the Context of the execution of a Use Case scenario.
We depict the model in Figure 3. On the right we capture the end user role abstractions as interfaces (as in Java or in C#; in C++, we can use pure abstract base classes). These capture the basic architectural form, to be filled in as requirements and domain understanding grow. At the top we find roles that start as clones of the role abstractions on the right, but whose methods are filled in. For a concept like a Source Account in a Money Transfer Use Case, we can define some methods independent of the exact type of object that will play that role at run time. These roles are generic types, analogous to Java or Ada generics or C++ templates. These two artifacts together capture the end user model of roles and algorithms in the code.
Figure 3. Combining Structure and Algorithm in a Class
On the left we have our old friends, the classes. Both the roles and classes live in the end user’s head. The two are fused at run time into a single object. Since objects come from classes in most programming languages, we have to make it appear as though the domain classes can support the business functions that exist in the separate source of the role formalisms. At compile time programmers must face the end user’s models both of Use Case scenarios and the entities they operate on. We want to help the programmer capture those models separately in two different programming constructs, honoring the dichotomy in the end user’s head. We usually think of classes as the natural place to collect such behaviors or algorithms together. But we must also support the seeming paradox that each of these compile-time concepts co-exists with the other at run time in a single thing called the object.
This sounds hard, but even end users are able to combine parts of these two views in their heads. That’s why they know that a Savings Account—which is just a way of talking about how much money I can access right now through a certain key called an account number—can be asked to play the role of a Source Account in a Money Transfer operation. So we should be able to snip operations from the Money Transfer Use Case scenario and add them to the rather dumb Savings Account object. Figure 3 shows such gluing together of the role logic (the arcs) and the class logic (rounded rectangles). Savings Account already has operations that allow it to carry out its humble job of reporting, increasing, or decreasing its balance. These latter operations, it supports (at run time) from its domain class (a compile-time construct). The more dynamic operations related to the Use Case scenario come from the roles that the object plays. The collections of operations snipped from the Use Case scenario are called roles. We want to capture them in closed form (source code) at compile time, but ensure that the object can support them when the corresponding Use Case comes around at run time. So, as we show in Figure 4, an object of a class supports not only the member functions of its class, but also can execute the member functions of the role it is playing at any given time as though they were its own. That is, we want to inject the roles’ logic into the objects so that they are as much part of the object as the methods that the object receives from its class at instantiation time.
Figure 4. Combining Structure and Algorithm in an Object
Here, we set things up so each object has all possible logic at compile time to support whatever role it might be asked to play. However, if we are smart enough to inject just enough logic into each object at run time, just as it is needed to support its appearance in a given role, we can do that, too.
Roles working together: Contexts and Interactions
When I go up to an ATM to do a money transfer, I have two objects in mind (let’s say that they are My Savings Account and My Investment Account), as well as a vision of the process, or algorithm, that takes money from some Source Account and adds it to some Destination Account in a way that is agreeable to both me and the bank. (It’s probably true that My Savings Account isn’t actually an object in a real bank, but it probably is an object within the realm of the ATM. Even if it isn’t, there are some nice generalizations in DCI that cause it not to matter.) I also have a notion of how to map between these two. I establish that mapping, or context, as I interact with the ATM.
First, I probably establish that I want to do a funds transfer. That puts a money-transfer scenario in my mind’s “cache,” as well as bringing some kind of representation of the roles and algorithms into the computer memory. We can capture these scenarios in terms of roles, as described above.
Second, I probably choose the Source Account and Destination account for the transfer. In the computer, the program brings those objects into memory. They are dumb, dumb data objects that know their balance and a few simple things like how to increase or decrease their balance. Neither account object alone understands anything as complex as a database transaction: that is a higher-order business function related to what-the-system-does, and the objects individually are about what-the-system-is. The higher-level knowledge doesn’t live in the objects themselves but in the roles that those objects play in this interaction.
Now I want to do the transfer. For the transfer to happen, I need My Savings Account to be able to play the role of Source Account, and the My Investment Account object to play the role of the Destination Account. Imagine that we could magically glue the member functions of the roles onto their respective objects, and then just run the interaction. Each role “method” would execute in the context of the object into which it had been glued, which is exactly how the end user perceives it. In the next section of this article we’ll look exactly at how we give the objects the intelligence necessary to play the roles they must play: for the time being, imagine that we might use something like delegation or mix-ins or Aspects. (In fact each of these approaches has at least minor problems and we’ll use something else instead, but the solution is nonetheless reminiscent of all of these existing techniques.)
Figure 5. Mapping Roles to Objects
The arrow from the Controller and Model into the Context just shows that the Controller initiates the mapping, perhaps with some parameters that give hints about the mapping, and that the Model objects are the source of most mapping targets. The Methodless Roles are identifiers through which application code (in the Controller and in Methodful Roles) accesses objects that provide services available through identifiers of that type. This becomes particularly useful in languages with compile-time type checking, as the compiler can provide a modicum of safety that ensures, at compile time, that a given object can and will support the requested role functionality.
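As a minimal illustration in Scala (the names are ours, not from any published DCI library, and BigDecimal stands in for the article's Currency type), a Methodless Role can be just a trait with declarations and no bodies:

object MethodlessRoleSketch {
  // Methodless role: a pure identifier type through which the
  // Controller and methodful roles reach the object playing the role.
  trait SourceAccount {
    def availableBalance: BigDecimal
    def decreaseBalance(amount: BigDecimal): Unit
  }

  // A dumb data class that is able to play the role.
  class SavingsAccount extends SourceAccount {
    private var balance: BigDecimal = 0
    def availableBalance: BigDecimal = balance
    def decreaseBalance(amount: BigDecimal): Unit = balance -= amount
  }

  // The binding is checked at compile time: an object that could not
  // support the role's operations would be rejected right here.
  val source: SourceAccount = new SavingsAccount
}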
By this time, all the objects necessary to effect the transfer are in memory. As indicated above, the end user also has a process or algorithm in mind to do the money transfer in terms of the roles involved. We need to pick out code that can run that algorithm, and then all we have to do is line up the right objects with the right roles and let the code run. As shown in Figure 5, the algorithm and role-to-object mapping are owned by a Context object. The Context “knows” how to find or retrieve the objects that become the actual actors in this Use Case, and “casts” them to the appropriate roles in the Use Case scenarios (we use the term “cast” at least in the theatrical sense and conjecturally in the sense of some programming language type systems). In a typical implementation there is a Context object for each Use Case, and each Context includes an identifier for each of the roles involved in that Use Case. All that the Context has to do is bind the role identifiers to the right objects. Then we just kick off the trigger method on the “entry” role for that Context, and the code just runs. It might run for nanoseconds or years, but it reflects the end user model of computation.
Now we have the complete DCI architecture:
The data, that live in the domain objects that are rooted in domain classes;
The context that brings live objects into their positions in a scenario, on demand;
The interactions, that describe end-user algorithms in terms of the roles, both of which can be found in end users’ heads.
As shown in Figure 5, we can think of the Context as a table that maps a role member function (a row of the table) onto an object method (the table columns are objects). The table is filled in based on programmer-supplied business intelligence in the Context object that knows, for a given Use Case, what objects should play what roles. A method of one role interacts with other role methods in terms of their role interfaces, and is also subject to the role-to-object mapping provided by the Context. The code in the Controller can now deal with business logic largely in terms of Contexts: any detailed object knowledge can be written in terms of roles that are translated to objects through the Context.
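Pulling these pieces together, here is a hedged end-to-end sketch in Scala (all names are ours; a real Context would presumably look up the actual account objects from its environment rather than construct them, as it does here only to stay self-contained):

object MoneyTransferSketch extends App {
  type Currency = BigDecimal

  // Methodless roles: the identifiers of the table's rows.
  trait SourceAccount {
    def availableBalance: Currency
    def decreaseBalance(amount: Currency): Unit
  }
  trait DestinationAccount {
    def increaseBalance(amount: Currency): Unit
  }

  // Methodful role: the Use Case algorithm, written purely in role terms.
  trait TransferMoneySource { this: SourceAccount =>
    def transferTo(destination: DestinationAccount, amount: Currency): Unit =
      if (availableBalance >= amount) {
        decreaseBalance(amount)
        destination.increaseBalance(amount)
      }
  }

  // Dumb data class: knows its balance and a few simple things.
  class Account(private var balance: Currency)
      extends SourceAccount with DestinationAccount {
    def availableBalance: Currency = balance
    def decreaseBalance(amount: Currency): Unit = balance -= amount
    def increaseBalance(amount: Currency): Unit = balance += amount
  }

  // One Context per Use Case: it binds role identifiers to the right
  // objects, then kicks off the trigger method on the "entry" role.
  class MoneyTransferContext(amount: Currency) {
    val source      = new Account(100) with TransferMoneySource
    val destination = new Account(0)
    def execute(): Unit = source.transferTo(destination, amount)
  }

  new MoneyTransferContext(40).execute()
}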
One way of thinking about this style of programming is that it offers a higher-order form of polymorphism than programming languages themselves support. In fact, all of the polymorphism can be under programmer control: roles are explicitly mapped to objects, and every role method invocation can be statically bound. This makes the code straightforward to analyze and understand statically. Compare that with the usual implementation of method dispatch in an object-oriented programming language, where it is in general impossible to determine through static analysis of the code where a method invocation will end up.
In some implementations the Context also does the injection of the business logic methods into the domain objects. This is particularly true in implementations based on dynamic languages such as Python and Ruby. In C++ and C# we usually “pre-load” all of the business logic methods by injecting them at the class level, which can be done even at compile time. In Scala we can achieve a hybrid when creating an object from a domain class by injecting the role methods as part of the instantiation. (Scala is really doing the same thing as C++ and C#, but it has a nice syntax for specifying mixins at instantiation points. The Scala compiler will generate an anonymous class that pre-loads all of the business logic methods, and that class is instantiated just at that one point.) When the object comes into existence it has a hybrid type that offers the behaviors both of the base domain class and of the Use Case roles.
Nested Contexts
One can imagine building rich Context objects that define whole subgraphs of self-contained role relationships: relationships so stable that they constitute a kind of domain in their own right. If these Context objects have a small number of public methods they can behave like domain objects. Consider a Savings Account, which is often wrongly used as an example of a class in simple courses on object orientation. A Savings Account is really a collection of behaviors on roles, where the roles are transactions, transaction logs, and audit trails. If Savings Account is a Context, it can map these roles onto the right objects for a given method (e.g., to calculate the balance of the account or to generate a monthly statement) and then kick off the computation on the suitable role. The Savings Account Context can be used as a domain object by “higher-level” Context objects, and it can call on Context objects below it. This is a powerful concept supporting a multi-tiered domain model.
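A hedged sketch of the idea in Scala (all names are ours): the Context's small public face lets a higher-level Context treat it exactly like a domain object:

object NestedContextSketch extends App {
  // One of the "dumb" objects this Context maps its roles onto.
  case class LedgerEntry(description: String, amount: BigDecimal)

  // A Context with a small public interface behaves like a domain
  // object. Each public method maps the Context's internal roles
  // (transactions, logs, audit trails, elided here) onto the right
  // objects and kicks off the computation on the suitable role.
  class SavingsAccountContext(entries: List[LedgerEntry]) {
    def balance: BigDecimal = entries.map(_.amount).sum
    def monthlyStatement: String =
      entries.map(e => s"${e.description}: ${e.amount}").mkString("\n")
  }

  // A higher-level Context can call on it as if it were a domain object.
  val savings = new SavingsAccountContext(
    List(LedgerEntry("Deposit", 100), LedgerEntry("Withdrawal", -40)))
  println(savings.balance) // prints 60
}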
Traits as the design trick to combine characteristics and purpose
The question is: how do we do this? And the punch line is a concept called a trait. If a role is an analysis concept (from the mind of the end user), then a trait is a general design concept that represents the role, and its implementation varies from programming language to programming language. For example, we can represent traits in C++ as templates whose member functions are composed with those of a concrete class at compile time, so that the object exhibits both the class and template behaviors at run time.
. . . .
template <class ConcreteAccountType>
class TransferMoneySourceAccount {
public:
    void transferTo(Currency amount) {
        beginTransaction();
        if (self()->availableBalance() < amount) {
            . . . .
        }
        . . . .
    }
private:
    // self() recovers the concrete class (the "curiously recurring
    // template pattern"), so that role and class behaviors compose
    // at compile time
    ConcreteAccountType* self() {
        return static_cast<ConcreteAccountType*>(this);
    }
};
. . . .
class SavingsAccount:
    public Account,
    public TransferMoneySourceAccount<SavingsAccount> {
public:
    void decreaseBalance(Currency amount) {
        . . . .
    }
};
. . . .
In Scala, traits are implemented by a language construct called, curiously enough, a trait, whose methods can be injected into an object at instantiation time.
. . . .
trait TransferMoneySourceAccount extends SourceAccount {
  this: Account =>   // only an Account object may play this role

  // This code is reviewable and testable!
  def transferTo(amount: Currency) {
    beginTransaction()
    if (availableBalance < amount) {
      . . . .
    }
  }
}
. . . .
val source = new SavingsAccount with TransferMoneySourceAccount
val destination = new CheckingAccount with TransferMoneyDestinationAccount
. . . .
In Squeak Smalltalk, we implement methodful roles with Squeak Traits used according to certain conventions pioneered by Schärli [4], and inject a trait’s methods into appropriate Data classes by adding its methods to the class method tables at compile time.
. . . .
RoleTrait named: #TransferMoneySource
    uses: {}
    roleContextClassName: #MoneyTransferContext
    category: 'BB5Bank-Traits'
. . . .
TransferMoneySource>>transfer: amount
    self balance < amount
        ifTrue: [self notify: 'Insufficient funds'. ^self].
. . . .
Object subclass: #Account
    uses: TransferMoneySource
    instanceVariableNames: 'balance'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'BB5Bank-Data'
DCI implementations also exist in C#/.Net (Christian Horsdal Gammelgaard), Ruby (Steen Lehmann), Python (David Byers and Serge Beaumont), and Groovy (Lars Vonk). The Qi4j environment (Rickard Öberg and Steen Lehmann) is pushing forward the ability to express traits in a Java environment.
Properties of DCI
We use roles to capture the main user concepts that participate in a Use Case requirement. Roles are first-class components of the end user cognitive model, so we want to reflect them in the code. Semantically, these roles map closely to the concept of interfaces in Java or .Net. However, we use interfaces to capture only the overall form of the behavioral design. Ultimately, our goal is to capture the Use Cases in code, and we’ll use other language features to do that. The approach varies with programming language. In Squeak and Scala, we can use traits directly. In C++ we can simulate traits using templates. In other languages, we can use classes together with some tricks that associate methods of one class with an object of another.
We use objects to capture the deep domain concepts that come from experience and tacit knowledge, as barely smart data. In the old days we distributed responsibilities to classes using CRC (Classes, Responsibilities, and Collaborations) cards. But it isn’t classes that exhibit responsibilities: roles do. We find this when we ask people to recall some activity: people talk about the task of ordering a book or of transferring money between accounts, and they describe most such transactions in terms of roles rather than classes.
The software exhibits the open-closed principle. Whereas the open-closed principle based on inheritance alone led to poor information hiding, the DCI style maintains the integrity of both the domain classes and the roles. Classes are closed to modification but are open to extension through the injection of roles.
DCI is a natural fit for Agile software development. It allows programmers to connect directly with the end user mental model (going beyond just customers to engage end users: individuals and interactions over processes and tools). We can therefore share the customers’ vocabulary and iterate the code side by side with them (customer collaboration over contract negotiation). We can reason about the form of task sequencing, which greatly raises the chance of delivering working software, because at least the programmer can understand it and the translation distance to the end user mental model is much shorter. And, last but not least, it separates the rapidly changing Use Case part from the stable domain part so that we embrace change. Each of these benefits ties directly to a provision of the Agile Manifesto (http://www.agilemanifesto.org).
Other bits
There are certainly other models in the user’s head. One common darling of some software engineering camps is business rules. DCI doesn’t provide a convenient home to capture rules; that is perhaps a weakness in the same way that the failure to capture interactions was a weakness of primordial object orientation. Many other formalisms, such as states and state transitions, can be viewed as derived models that come from the data and usage models. For example, I know that it makes sense to depress the accelerator on my car only if I am in a state where the gearbox is engaged; the state machine representation of this constellation would show the accelerator “message” as allowable only in the gearbox-engaged “state.” However, this transition can also be viewed as a sequence of steps that are described in terms of roles (accelerator, gearbox, engine). A quick check suggests that this latter model is a better fit for the end user’s intuition, while the state machine model may be a better fit for a nerd-centric view.
However, we offer no firm research evidence for such conclusions; in the interest of full disclosure, this is an area where we believe additional research could bear fruit. Still, lacking a complete picture is probably not a good reason to avoid moving to a more faithful one, and we view DCI as an important step in that direction.
DCI fulfilling a bit of history
DCI is in many respects a unification of many past paradigms that have appeared as side-cars to object-oriented programming over the years.
Though aspect-oriented programming (AOP) has other uses as well, DCI addresses many of the applications of AOP and many of the goals of Aspects in separating concerns. In line with the fundamental principles underlying AOP, DCI is based on a deep form of reflection or meta-programming. Unlike Aspects, Roles aggregate and compose nicely. Contexts provide a scoped closure of association between sets of roles, while Aspects pair only with the objects to which they are applied.
In many ways DCI reflects a mix-in style strategy, though mix-ins themselves lack the dynamics that we find in Context semantics.
DCI implements many of the simple goals of multi-paradigm design, in being able to separate procedural logic from object logic. However, DCI has much better coupling and cohesion results than the more brute-force techniques of multi-paradigm design offer.
End notes
1. Kay, Alan. A Personal Computer for Children of All Ages. Xerox Palo Alto Research Center, 1972 (http://www.mprove.de/diplom/gui/Kay72a.pdf).
2. IFIP-ICC Vocabulary of Information Processing. North-Holland, Amsterdam, 1966, pp. A1–A6.
3. Reenskaug, Trygve, Wold, P., and Lehne, O. A. Working with Objects: The Ooram Software Engineering Method. Greenwich: Manning Publications, 1996.
4. Schärli, N., Nierstrasz, O., Ducasse, S., Wuyts, R., and Black, A. “Traits: The Formal Model.” Technical Report IAM-02-006, Institut für Informatik, Universität Bern, Switzerland, November 2002. Also available as Technical Report CSE-02-013, OGI School of Science & Engineering, Beaverton, Oregon, USA.
Acknowledgments
Many thanks for comments and a Scala code example from Bill Venners.
About the authors
Trygve Reenskaug has 50 years of experience with the development of professional software and software engineering methodologies. He is now a researcher and professor emeritus of informatics at the University of Oslo. He has extensive teaching and speaking experience, including keynotes, talks, and tutorials. His firsts include the Autokon system for computer-aided design of ships with end user programming, structured programming, and a database-oriented architecture (1960); object-oriented applications (1973); Model-View-Controller, the world’s first reusable object-oriented framework (1979); the OOram role modeling method and tool (1983); and the premier book on role modeling (1995). He was a member of the UML Core Team, adding parts of the role modeling technology under the name of Collaborations. He has developed the DCI paradigm for high-level programming of object system state and behavior. He is currently working on BabyIDE, a companion development environment for working with a program as seen in different perspectives such as Data, Context, and Interaction (DCI).
Jim Coplien is a Senior Agile Coach and System Architect at Nordija A/S, doing international consulting in organization structure, software patterns, and system architecture, as well as software development in electronic design automation, telecom, and finance. His academic career includes a visiting professorship at the University of Manchester Institute of Science and Technology, an appointment as the 2003-2004 Vloebergh Chair at Vrije Universiteit Brussel, two years as an Associate Professor at North Central College in Naperville, Illinois, and extensive work developing some of the first C++ and OOD training materials. He is well known for his foundational work on object-oriented programming and on patterns, and his current research explores the formal foundations of the theory of design, the foundations of aesthetics and beauty, and group-theoretic models of design structure. His most recent book, “Organizational Patterns of Agile Software Development” (co-authored with Neil Harrison), culminates a decade of research. His book “Advanced C++ Programming Styles and Idioms” defined design and programming techniques for a generation of C++ programmers, and his “Multi-Paradigm Design for C++” presents a vision for the evolution of current OO design techniques into a more general and better-grounded theory of software construction.