Application Lifecycle Management – UML – Ending Poor Software Reliability and Stability

The art, or craft, of designing, developing, testing, validating, integrating, implementing, and managing software is deteriorating. Developing reliable, serviceable, and stable software is as much an art or craft as it is a science. Six components of producing good software have shown rapid deterioration: design, testing, validation, integration, implementation, and management.

Why? They are all very costly to service and support, and little attention is given to managing them with an integrated approach. This leaves the development of the software code unsupported. Coding is often started before the design is even partially completed, and testing and validation have become lost skill sets from lack of attention. Without those skill sets, proper integration and implementation are impossible. Finally, managing the implemented environment becomes a nightmare: design flaws, and the reliability and stability problems that follow from poor integration and implementation, take their toll.

Many methodologies are available for ensuring the production of reliable software. Most are misunderstood or short-circuited by short-cuts. It really doesn’t matter which you use … ITIL, CMMi, Agile Software Development, or one of the popular SDLCs. They all seem to fail in the face of escalating costs and delays. Yes, they are all expensive and can be cumbersome to use.

However, recently there has been a move towards automated Application Lifecycle Management (ALM) solutions. Are they expensive? You bet. Do they require significant training and experience to use properly? Of course! Is anything valuable inexpensive? Are they a complete or perfect solution? No; they are new technology and just becoming effective, but they are evolving. What I have seen of the available solutions yields significantly improved designs and software code, with integrated project management throughout the application lifecycle. Testing, validation, and problem-determination technologies are also improving, which will facilitate proper integration and implementation. But it all depends on how we manage these new, evolving technologies.

Software cannot afford to be expensive to maintain because it directly impacts the ability of the enterprise to operate profitably. The software has to be manageable and adaptable to survive. I have seen software systems run for 20 or 30 years if properly designed and managed. Today, most newly implemented software does not last for more than five years before it has to be replaced.

Why? Agile Software Development is misunderstood and not properly implemented. One must fully understand the Enterprise and/or Corporate Entity’s Business Process Model. This model must be a tangible asset: a blueprint or schematic of the business-process entities. These Entities must be defined as Objects, with proper Classifications and Sub-Classifications, together with their business-process methods (operations) and the data elements they utilize, modify, integrate, aggregate, and relate. The Entity Classes, their Objects and methods, and the relationships and actions on their data must have their workflow processes modeled against the persistent data-storage model.
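To make the entity/object idea above concrete, here is a minimal Python sketch; the Invoice entity, its methods, and its data elements are all invented for illustration, not taken from any particular Business Process Model:

```python
from dataclasses import dataclass, field

@dataclass
class Invoice:  # a business-process Entity modeled as a Class
    invoice_id: str
    line_items: list = field(default_factory=list)  # data elements it aggregates
    status: str = "open"

    def add_line_item(self, description: str, amount: float) -> None:
        # business-process method: modifies the entity's data elements
        self.line_items.append({"description": description, "amount": amount})

    def total(self) -> float:
        # business-process method: aggregates data elements
        return sum(item["amount"] for item in self.line_items)

    def close(self) -> None:
        # workflow transition on the entity
        self.status = "closed"

inv = Invoice("INV-001")
inv.add_line_item("Consulting", 1500.0)
inv.add_line_item("Support", 500.0)
print(inv.total())   # 2000.0
inv.close()
print(inv.status)    # closed
```

The point is that the entity, its operations, and its data travel together as one modeled unit, which is what the workflow and persistence models then reference.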

How is this done? There are many powerful Business Process and Workflow Modeling Tools; Entity Object and Object-Oriented Design Tools; and Data Modeling and Entity Relationship Diagramming Tools available. But they are typically not integrated, and they overlap so much that they confuse and obfuscate the objective: a synergized, holistic design.

Fortunately there is a solution to this problem. The Unified Modeling Language (UML) provides for a truly synergized, holistic design. UML is a standardized, general-purpose modeling language in the field of object-oriented software engineering. The standard was created, and is managed, by the Object Management Group and has been continually improved since 1997. With 15 years of evolutionary improvements, UML has become a very powerful tool.

UML is used to specify, visualize, modify, construct, and document the artifacts of an object-oriented, software-intensive system under development. UML offers a standard way to visualize a system’s architectural blueprints, including elements such as:

  • activities
  • actors
  • business processes
  • database schemas
  • (logical) components
  • programming-language statements
  • reusable software components

UML combines techniques for data modeling (entity relationship diagrams), business modeling (work flows), object modeling, and component modeling. It can be used with all processes, throughout the software development life cycle, and across different implementation technologies.

UML must be programming-language and compiler independent, yet provide a powerful pseudo-programming language so that plug-ins can generate program code for many programming languages, as well as database schema definition and database management languages. Besides supporting a variety of DBMS management languages, the plug-ins should also provide the interfaces and APIs for online and batch transaction processing, web services technologies, web application technologies, message queuing technologies, and various middleware technologies across a variety of processing platforms.
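To make the plug-in idea concrete, here is a toy Python sketch of model-to-code generation: walk a language-neutral model and emit source for a target language. The model dictionary format and the `generate_python` function are illustrative inventions, not the API of any real UML tool.

```python
# A hypothetical, language-neutral class model (not any real tool's format).
model = {
    "class": "Customer",
    "attributes": [("name", "str"), ("credit_limit", "float")],
    "operations": ["place_order", "cancel_order"],
}

def generate_python(m):
    # Emit a Python class skeleton from the model description.
    lines = [f"class {m['class']}:"]
    args = ", ".join(f"{n}: {t}" for n, t in m["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for n, _ in m["attributes"]:
        lines.append(f"        self.{n} = {n}")
    for op in m["operations"]:
        lines.append(f"    def {op}(self):")
        lines.append("        raise NotImplementedError")
    return "\n".join(lines)

source = generate_python(model)
print(source)

# The generated text is valid Python and can be loaded directly:
namespace = {}
exec(source, namespace)
c = namespace["Customer"]("Acme", 10000.0)
print(c.name)  # Acme
```

A real ALM plug-in does the same thing at scale, with additional back-ends emitting Java, COBOL, DDL, and so on from the same model.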

ALM products must provide comprehensive UML facilities with plug-ins and/or interfaces for a wide variety of programming languages, and extensive extensions for OLTP, DBMSs, Web Services (WSDL, HTML/CSS, XML, SOAP, REST, etc.), and Web Application Services (Java, JavaScript, JSON, Ajax, Node.js, PHP, Python, .NET, Ruby, jQuery, Wicket).

ALM products must also interface with a wide variety of IDEs. IDEs tend to be designed to support a given platform or closely related platforms: there are IDEs specifically designed for IBM mainframes; others for UNIX, Linux, or both; for Microsoft Windows and Windows Server; Apple Macintosh; Apple iOS; Google Android; Google Chrome; Mozilla Firefox; etc. Many IDEs can accept other IDEs as plug-ins, which is becoming more common as IDEs move to a more distributed computing model.

Cloud Computing has already appeared as a computing platform. IDE’s and ALM’s are trying to quickly adapt to this new computing paradigm. This will make the distributed computing model even more complex.

Investment in application design, programming/development, testing, implementation and maintenance is going to continue to grow with the increasing demand of more complex distributed computing models. This will require ALM tools to be augmented with risk assessment technologies; better integration of project management technologies; truly automated distributed processing application build technologies; sophisticated and integrated test, debugging, and mitigation technologies; sophisticated distributed system configuration administration and management technologies including performance monitoring and tuning technologies; and asset management technologies.

Why? Without a complete, integrated suite of formal development paradigms and integrated IDE, ALM, Risk Assessment, and Asset Management technologies to manage the distributed computing model, the Total Cost of Ownership reaches unacceptable levels and the Return on Investment is never realized.

The tools we choose are important. But experience when using them will offer the biggest payoffs.


Software Configuration Management – SCM’s Many Usages and Definitions

We often misuse the term SCM (or SCCM) out of context and without a true understanding of its specification and scope. The acronym has been used for:

  • Source Code Management
  • Software Configuration Management
  • System Change Management
  • System Configuration Management

The acronym is also used in Business Process Models to describe a supply-and-logistics application: Supply Chain Management.

The term has been used with and without version and/or release control; version and/or release builds; restricted and non-restricted revision control; and trunk and/or branch dependent or independent revision control.
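To pin down the trunk/branch vocabulary used above, here is a minimal Python sketch of a revision store; the `Repo` class is a toy of my own, not any product’s data model:

```python
class Repo:
    def __init__(self):
        self.revisions = {}          # rev id -> (parent rev id, payload)
        self.heads = {"trunk": None} # each named line of development has a head
        self._next = 0

    def commit(self, branch, payload):
        # A revision records its parent, forming a history chain per branch.
        rev = self._next
        self._next += 1
        self.revisions[rev] = (self.heads[branch], payload)
        self.heads[branch] = rev
        return rev

    def branch(self, name, from_branch="trunk"):
        # A branch starts at the current head of its parent line.
        self.heads[name] = self.heads[from_branch]

    def history(self, branch):
        rev, out = self.heads[branch], []
        while rev is not None:
            parent, payload = self.revisions[rev]
            out.append(payload)
            rev = parent
        return list(reversed(out))

repo = Repo()
repo.commit("trunk", "v1")
repo.branch("release-1")
repo.commit("trunk", "v2")           # trunk moves on...
repo.commit("release-1", "v1-fix")   # ...independently of the branch
print(repo.history("trunk"))      # ['v1', 'v2']
print(repo.history("release-1"))  # ['v1', 'v1-fix']
```

"Branch-independent" revision control is exactly this property: after the branch point, each line of development accumulates revisions without disturbing the other.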

SCM has also been used in conjunction with Application Life Cycle Management and Software Development Life Cycle; and with Project Management and Problem Management resource and task/assignment control.

In ITIL it has been associated with UML. In Object Oriented Analysis and Design it has been associated with Object Oriented Modeling and Programming;  and Object Oriented Mapping, Components, and Construction.

SCM in its Source, Software, and System Management capacities is an integral part of all three Software Engineering Institute CMMi Models:

  1. Product and service development – CMMI for Development (CMMI-DEV)
  2. Service establishment, management, and delivery – CMMI for Services (CMMI-SVC)
  3. Product and service acquisition – CMMI for Acquisition (CMMI-ACQ).

Under the CMMi, the Core Process Area of Configuration Management (CM) is used within all three CMMi Models listed above. In CMMi, Configuration Management is listed under Process Area – Support; however, Configuration Management is an integral process in design, development, implementation, and support. Under CMMi CM, SCCM/SCM provides all the facilities of source-code change and control management; software change, control, and configuration management; and system change and configuration management.

In my blog posting “IT Configuration Management using the EIML Model”, I discuss IT Infrastructure Integrated System Stacks (ITIISS). Configuration Management becomes a systemic process through all layers of the ITIISS Model, which from top to bottom includes:

  •  Secure Sockets/Graphical User Interfaces
  • Configuration Management Technologies:
    • Security
    • SCM
    • Backup, Recovery, and Archiving Administration Technologies
  • Administration Technologies:
    • DBA Administration
    • System Administration
    • Server Administration
  • Business Process Model Processes:
    • Application Development Technologies
    • Business Analytic and Intelligence Technologies
  • Transaction Processing Services:
    • Online Transaction Processing Services
    • Intercommunication Services
  • Application Service Technologies:
    • Database and Data Warehouse Technologies
    • Web Server Technologies
    • Application Server and J2EE Technologies
    • High-Level Programming Language Run-time Environments
    • File Management Technologies
  • Communication Layer Technologies:
    • Telecommunications Software
    • Intercommunication Technologies
    • Middleware Technologies
  • Software Service Technologies for IT Hardware:
    • Firmware
    • Hyper-Visors
    • Operating System(s)
    • File System Management
  • Platforms:
    • Telecom
    • Networking Fabric
    • Processor Servers
    • Disk Storage
    • Virtual Tape Storage
    • Magnetic Tape Libraries
    • Cabling
  • Environment:
    • Racks
    • Cables and Wiring
    • HVAC
    • Fire Suppression Systems
    • Industrial Central Electrical Circuit Breaker Panel
    • Lighting
    • Power Distribution Panels
    • Alternate Power Units (Batteries)
    • Electric Generators

Enterprise Information Management Logistics Graphic Model

Enterprise Information Management Logistics Technologies Graphic Model

For more information see “Enterprise Information Management Logistics – EIML”.

There are many Source Code Management and Software Configuration Management solutions available which are free, open source, or proprietary. They include, but are not limited to:

Product – Repository Model – License
RCS – Local – Open Source
PVCS – Client-Server – Proprietary
CVS – Client-Server – Open Source
CVSNT – Client-Server – Open Source
Subversion – Client-Server – Open Source
Software Change Manager – Client-Server – Proprietary
Rational ClearCase – Client-Server – Proprietary
Visual SourceSafe – Client-Server – Proprietary
Perforce – Client-Server – Proprietary
StarTeam – Client-Server – Proprietary
MKS Integrity – Client-Server – Proprietary
AccuRev SCM – Client-Server – Proprietary
SourceAnywhere – Client-Server – Proprietary
SourceGear Vault – Client-Server – Proprietary
Team Foundation Server – Client-Server – Proprietary
Rational ClearQuest (for Rational Team Concert) – Client-Server – Proprietary
Rational Team Concert (RTC) – Client-Server, ALM – Proprietary
GNU arch – Distributed – Open Source
Darcs – Distributed – Open Source
DCVS – Distributed – Open Source
SVK – Distributed – Open Source
Monotone – Distributed – Open Source
Codeville – Distributed – Open Source
Git – Distributed – Open Source
Mercurial – Distributed – Open Source
Bazaar – Distributed – Open Source
Fossil – Distributed – Open Source
Veracity – Distributed – Open Source
TeamWare – Distributed – Proprietary
Code Co-op – Distributed – Proprietary
BitKeeper – Distributed – Proprietary
Plastic SCM – Distributed – Proprietary
CA-Panvalet – Mainframe – Proprietary
CA-Librarian – Mainframe – Proprietary
CA-Endevor – Mainframe, ALM – Proprietary
CA-SCM Software Change Manager – Client-Server, ALM – Proprietary
ISPW Software Configuration Manager – Mainframe/Distributed, ALM – Proprietary

Agile Software Development’s Roots in Chief Programmer Teams

Many techno-babblers are unaware that the “Agile Software Development” phenomenon is a paradigm that began as “Chief Programmer Teams” in the 1960s, maturing and evolving into a complete methodology by the late 1970s.

Today the “Agile Programming” paradigm is all too often reduced to its lowest common denominator: a process of quick-and-dirty hacking with little if any quality assurance factored in. Source Code Management, version control, controlled and fully tested builds, and “delivery services” implementation paradigms are loosely controlled, if controlled at all. In essence, all Configuration Management principles are ignored or thinly threaded through, resulting in an unpredictable, poorly performing maintenance nightmare.

The refined and formalized “Agile Methodology” is defined as:

“… software development incorporating a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration between self-organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery, a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual framework that promotes foreseen interactions throughout the development cycle. The Agile Manifesto introduced the term in 2001.”

The Twelve Principles of Agile Software

  1. The highest priority is to satisfy the customer through early and continuous delivery of valuable software.
  2. Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  3. Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  4. Business people and developers must work together daily throughout the project.
  5. Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  6. The most efficient and effective method of conveying information to, and within, a development team is face-to-face conversation. (This would include collaboration technologies like Microsoft Lync and Microsoft Office 365; and collaborative development technologies like IBM developerWorks Collaborative Development and IBM Rational Team Concert for System z and the Jazz platform; Oracle Collaboration Suite Application Development; and Microsoft Azure; or comparable other third-party or open-source technologies, which have been tightly integrated and synergized).
  7. “Working” software is the primary measure of progress. (Working software is defined as that software which conforms to the requirements and specifications of the application characteristics including business processes and workflow; data flow dynamics; information presentation, access, relevancy, retention, and recovery; overall integrity and exception handling; ease of maintenance and enhancements).
  8. Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  9. Continuous attention to technical excellence and good design enhances agility.
  10. Simplicity–the art of maximizing the amount of work not done–is essential.
  11. The best architectures, requirements, and designs emerge from self-organizing teams.
  12. At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Agile Methodology has its roots, and a fine pedigree, in the “Chief Programmer Team” methodology, which began to be formulated in the 1960s and grew into a formal methodology by 1979. This methodology was augmented by the Capability Maturity Model.

“The Capability Maturity Model (CMM) (a registered service mark of Carnegie Mellon University, CMU) is a development model that was created after study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. This model became the foundation from which CMU created the Software Engineering Institute (SEI). The term “maturity” relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes.”

In the 1960s, the use of computers grew more widespread, more flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. Many processes for software development were in their infancy, with few standard or “best practice” approaches defined.

As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its early years, and ambitions for project scale and complexity exceeded the market’s ability to deliver adequate products within a planned budget. Individuals such as Harlan D. Mills, Frederick P. Brooks, Edsger Dijkstra, Robert W. Floyd, Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas began to publish articles and books with research results in an attempt to professionalize the software-development process. Tom DeMarco authored “Structured Analysis and System Specification” (ISBN-10: 0138543801 | ISBN-13: 978-0138543808), and Christopher (Chris) Gane and Trish Sarson wrote and published “Structured Systems Analysis: Tools and Techniques” (ISBN-10: 0138545472 | ISBN-13: 978-0138545475) in 1979. Both publications were major contributions to the structured systems analysis and design methodology known as SSADM.

Harlan D. Mills was the author of “Chief Programmer Teams, Principles, and Procedures”, IBM Federal Systems Division Report FSC71-5108 (Gaithersburg, Md.), which I believe was published around 1971. As an IBM research fellow, Mills adapted existing ideas from engineering and computer science to software development, including the structured programming theory of Edsger Dijkstra and Robert W. Floyd (both awarded the Turing Award). His Cleanroom software development process emphasized top-down design and formal specification.

Frederick Phillips Brooks, Jr. was a software engineer and computer scientist, best known for managing the development of IBM’s System/360 family of computers and the OS/360 software support package, and for later writing candidly about the process in his landmark book “The Mythical Man-Month”. He wrote the paper “No Silver Bullet: Essence and Accidents of Software Engineering” in 1987. Brooks has received many awards, including the National Medal of Technology in 1985 and the Turing Award in 1999.

Larry LeRoy Constantine spent several years studying the works of IBM Fellow Harlan Mills, Edsger Dijkstra, and Robert W. (Bob) Floyd. Data flow diagrams were proposed by Constantine, the original developer of structured design, based on Martin and Estrin’s “data flow graph” model of computation. He started his working career as a Technical Aide/Programmer at the M.I.T. Laboratory for Nuclear Science in 1963. From 1963 to 1966 he was a Staff Consultant and Programmer/Analyst at C-E-I-R, Inc. From 1966 to 1968 he was President of the Information & Systems Institute, Inc. In 1967 he also became a post-graduate program instructor at the Wharton School of Business, University of Pennsylvania. From 1968 to 1972 he was a faculty member of the I.B.M. Systems Research Institute.

Constantine left the Systems Research Institute in 1972, having begun a manuscript for “Fundamentals of Program Design: A Structured Approach”. In 1974, after he resumed work on the manuscript, Edward Nash Yourdon (an American software engineer, computer consultant, author, lecturer, and pioneer in software engineering methodology) reviewed it and urged him to complete it. Work on the manuscript continued as a combined effort of Constantine and Yourdon.

As part of structured design, Constantine developed the concepts of cohesion (the degree to which the internal contents of a module are related) and coupling (the degree to which a module depends upon other modules). These two concepts have been influential in the development of software engineering and stand apart from structured-modular design as significant contributions in their own right. They have proved foundational in areas ranging from software design to software metrics, and have become part of the vernacular of the discipline.
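A small Python sketch (my own, not from Constantine’s work) can make the two concepts concrete:

```python
# LOW cohesion: unrelated responsibilities lumped into one module.
class Utilities:
    @staticmethod
    def sales_tax(amount, rate):
        return amount * rate

    @staticmethod
    def strip_header(text):
        return text.split("\n", 1)[-1]

# HIGH cohesion: one module, one closely related set of operations.
class SalesTax:
    def __init__(self, rate):
        self.rate = rate

    def tax(self, amount):
        return amount * self.rate

    def total(self, amount):
        return amount + self.tax(amount)

# TIGHT coupling: the caller digs into SalesTax's internals and
# duplicates its formula, so it breaks if SalesTax changes.
def tight_report(calc, amount):
    return amount + amount * calc.rate

# LOOSE coupling: the caller depends only on the public interface.
def loose_report(calc, amount):
    return calc.total(amount)

calc = SalesTax(0.05)
print(loose_report(calc, 100.0))  # 105.0
```

Both reports compute the same number today, but only the loosely coupled one survives a change to how `SalesTax` works internally, which is exactly why these ideas became foundational for software metrics.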

Constantine and Yourdon are known for their clear influence on methodologies for the creation of efficient and reliable software through the definitive work “Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design” by Edward Yourdon and Larry L. Constantine, copyright 1975; published by Yourdon Press in 1975 and again in 1979 by Prentice-Hall. The publication is considered so valuable that brand-new copies still sell (yes, it is still in print) for $112.82 USD. If you shop around you may still find a new copy for $51 to $159 USD; used copies in good to very good condition range from $5 to $20 USD, plus about $4.00 USD shipping within the continental United States.

In 1999 Constantine received the Jolt Award for Product Excellence, best book of 1999 for the publication co-authored with Lucy A. D. Lockwood entitled “Software for Use: A Practical Guide to the Models and Methods of Usage-Centered Design“. In 2001 he received the Platinum Award of Excellence, for “Performance-Centered Design Competition 2001: Siemens AG, STEP-7 Lite”. In 2006 he was recognized as a Distinguished Engineer by the Association for Computing Machinery, and in 2007 he was made a Fellow of the ACM. He is the 2009 recipient of the Stevens Award for “outstanding contributions to the literature or practice of methods for software and systems development.” He also received a Simon Rockower Award in 2011 from the American Jewish Press Association. Professor Constantine is a professional Member of the Industrial Designers Society of America; a member of the Usability Professionals’ Association and the IEEE Computer Society; and he is an Active Member of the Science Fiction and Fantasy Writers of America.

Professor Constantine is currently on the faculty of the Mathematics and Engineering Department at the University of Madeira, Portugal, and is considered one of the pioneers of computing. He has been a professor there since 2006, where he headed the Laboratory for Usage-centered Software Engineering, a research center dedicated to studying the human aspects of modern software engineering. In 2010 he became an Institute Fellow at the Madeira Interactive Technologies Institute and is participating in a joint program between Carnegie Mellon University (CMU) and the University of Madeira. In collaboration with Lucy A. D. Lockwood, he has also established the consulting firm of Constantine & Lockwood, Ltd.

With the maturing of CMMi V1.3 from the Software Engineering Institute, its Core Process Areas of Configuration Management (CM), Organizational Process Focus (OPF), Organizational Performance Management (OPM), Organizational Process Performance (OPP), and Process and Product Quality Assurance (PPQA) provide a quantitative and qualitative testing paradigm.

Other CMMi Core Process Areas of importance are Requirements Management (RM); Organizational Process Definition (OPD); Organizational Training (OT); Measurement and Analysis (MA); Decision Analysis and Resolution (DAR); Project Planning (PP); Integrated Project Management (IPM); and Quantitative Project Management (QPM).

TechCrunch NYC 2012 DISRUPT Conference Sound Bites

At the TechCrunch NYC 2012 DISRUPT Conference, Federal Government CIO Steven VanRoekel and CTO Todd Park discussed five new government technology initiatives under a White House program named “The Presidential Innovation Fellows”, which cleverly omits the term “Information Technology”.

In opening statements, Federal CIO Steven VanRoekel made disparaging remarks about COBOL programmers and the COBOL programming language when he quipped,

“I’m recruiting COBOL developers, any out there?,” sending Federal CTO Todd Park into fits of laughter. Federal CIO VanRoekel added: “Trust me, we still have it in the Federal government, which is quite, quite scary.”

CIO VanRoekel and CTO Park introduced The Presidential Innovation Fellows as five new Federal Government technology initiatives:

  1.  MyGov
  2.  Open Data Initiatives
  3.  Blue Button for America
  4.  RFP-EZ
  5.  The 20% Campaign

They are explained in more cryptic detail at the White House website for The Presidential Innovation Fellows.

CIO VanRoekel also warned that, due to the proliferation of ‘.gov’ websites, the Federal Government was going to decree that no further ‘.gov’ URLs would be issued. This is in clear violation of the rights of states, territories, counties, municipalities, cities, townships, and communities under the Constitution of the United States to provide services to their residents and constituents. The Federal CIO also insinuated that all government organizations, both Federal and State, would be forced or coerced into consolidating ‘.gov’ websites by Federal legislation currently before Congress.

VanRoekel’s comments were about as subtle as calling the Department of Homeland Security “The Department of Homeland Surveillance” or the Department of Justice’s Federal Bureau of Investigation the “Federal Bureau of Intimidation”.

Their objectives for MyGov, Open Data Initiatives, and Blue Button for America would be to open up Federal Government information to the public using personal health care records and student loan assistance information as clever examples.

The 20% Campaign would be to design a country-wide electronic monetary system all accessed with mobile smart devices (which could be as easily tracked by the Federal Government as credit and debit cards are today).

CTO Todd Park’s remarks described a NOVUS ORDO SECLORUM. He described “The Presidential Innovation Fellows” as “forward thinking innovators — the baddest a?s of the bad a?ses out there,” supported by “bad a?s hackers”.


These initiatives are part of the Digital Government strategy.

Strategy Objectives

The Digital Government Strategy sets out to accomplish three things:

  • Enable the American people and an increasingly mobile workforce to gain access to high-quality digital government information and services anywhere, anytime, on any device.

Operationalizing an information-centric model, we can architect our systems for interoperability and openness, modernize our content publication model, and deliver better, device-agnostic digital services at a lower cost.

  •  Ensure that as the government adjusts to this new digital world, we seize the opportunity to procure and manage devices, applications, and data in smart, secure and affordable ways.

 Learning from the previous transition of moving information and services online, we now have an opportunity to break free from the inefficient, costly, and fragmented practices of the past, build a sound governance structure for digital services, and do mobile “right” from the beginning.

  •  Unlock the power of government data to spur innovation across our Nation and improve the quality of services for the American people.

We must enable the public, entrepreneurs, and our own government programs to better leverage the rich wealth of federal data in applications and services, by ensuring that data is open and machine-readable by default.
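“Open and machine-readable by default” is a concrete engineering requirement: publish records as structured data rather than prose or PDF tables. A minimal Python sketch, using invented field names (not any federal schema):

```python
import json
import csv
import io

# Hypothetical record; the fields are illustrative only.
record = {"program": "student-loan-assistance", "state": "NY", "recipients": 12345}

# Machine-readable by default: the same record as JSON...
as_json = json.dumps(record, sort_keys=True)

# ...and as CSV, so both programs and spreadsheets can consume it.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=sorted(record))
writer.writeheader()
writer.writerow(record)
as_csv = buf.getvalue()

print(as_json)
# Round-tripping shows the point: another program can parse it losslessly.
assert json.loads(as_json) == record
```

A PDF of the same table would lose exactly this property: no third-party application could reliably recover the fields and values.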

For more information, see the US Federal Government’s Digital Government program, subtitled “Building a 21st Century Platform to Better Serve the American People”.

Just to give US Federal CTO Todd Park a heads-up on his new US Federal Government “Open Data Initiative” regarding health-care record access: IBM has just been issued US Patent No. 8,185,411 for “Method, System, and Apparatus for Patient Controlled Access of Medical Records” (with security access protection).

This patent may stand as a direct contradiction, courtesy of the United States Patent and Trademark Office, to Federal CTO Todd Park’s comments that the private sector has not provided significant new technologies for US citizens to access their personal health-care records. Those records sit in the deep bowels of the Federal Government’s obviously antiquated information technology, which presumably runs on the very IBM mainframes US Federal CIO Steven VanRoekel described to the information technology community with “Trust me, we still have it in the Federal government, which is quite, quite scary.”

Kudos to IBM for making top US Federal CIO and CTO officials choke on their own Techno-Babble!

Welcome to the “Land of the Free!” and the “Home of the Brave!”

 “The land of your tired, your poor,

Your huddled masses, yearning to breathe free,

The wretched refuse of your teeming shores,

The homeless, tempest-tossed souls”

Welcome my friend to “The New World Order!”

2012 – Will Mark the Beginning of the Decade of the IBM Mainframe Skill Set Shortage

US and EU companies are struggling to fill IBM Mainframe skill-set shortages as existing staff retire and IT students are discouraged from pursuing what appears to be an expensive and outdated development environment with a high Total Cost of Ownership and a poor Return on Investment. IBM, however, protests that recent IT industry research shows this to be untrue.

The Government of India and outsourcing providers in India have targeted this skill-set shortage by establishing academic curricula for training students for certification and for graduate and post-graduate degrees. India’s objective is to become the world’s largest mainframe outsource provider, rivaling even IBM, as well as the largest holder of H-1B, L-1, F-1, J-1, and M-1 visas.

The Government of India is closely watching the IBM Academic Initiative’s efforts to establish university and college curricula for Information Technology and Computer Science graduate, post-graduate, and doctorate degrees in the United States. It plans to provide as many candidates as possible for US F-1, J-1, and M-1 student visas, since holders of these visas become eligible for one of the 20,000 additional H-1B visas offered annually to students who have obtained a post-graduate or doctorate degree in the United States.

One would be foolish to think that China, Taiwan, the Philippines, Japan, Indonesia, Brazil, or Ireland will stand idly by while India or IBM tries to dominate the mainframe world market. Recently, South America and Africa have woken up to the economic benefits of competing in this market as well.

The IBM Academic Initiative is working with Universities and Colleges to establish new IT and Computer Science graduate, post-graduate, and doctorate curricula for IBM Mainframe zEnterprise System z, System p, System i, and System x technologies.

It will be important for the US Federal Government to offer special educational grants for US student citizens to compete with the Government of India’s strategy to dominate the IBM Mainframe consulting, contracting, and outsourcing markets.

More importantly, the IBM Academic Initiative must provide cost-effective IBM Global Services and IBM Training Courses to US Community Colleges to build a groundswell of undergraduate degrees in Information Technology, Computer Programming, and Computer Science. These Associate Degrees would be tailored to transfer to US Universities and Colleges toward graduate, post-graduate, and doctorate degrees.

Community Colleges must be seeded with these curricula to provide low-cost programs to residents throughout the United States, as most University and College programs are cost-prohibitive, requiring students to take out oppressive student loans that take too many years to repay.

The IBM Academic Initiative must also work with the Association for Career and Technical Education‘s educational and vocational institutes which provide various vocational accreditation, certification, and undergraduate, graduate, and post-graduate degrees in Information Technology and Computer Science.

Federal involvement is principally carried out through the Carl D. Perkins Career and Technical Education Act. Accountability requirements tied to the receipt of federal funds under this Act help provide some overall leadership. The Office of Vocational and Adult Education within the US Department of Education also supervises activities funded by the Act, along with grants to individual states and other local programs.

IaaS Provisioning Using the EIML Model   1 comment

After decades of doing systems architecture, integration, and delivery services and implementing Configuration Management and Disaster Recovery processes I have put together an Enterprise Information Management Logistics model which addresses the “issues listed above”.

They are not objectives. All planning to identify the objectives must start with a Business Process Model (which should include a Workflow Process Model) for the Enterprise. The BPM must be supplemented with an Entity Relationship Model, and finally an Enterprise Information and Data Model. Only then can you begin to identify your Enterprise Information Systems objectives. Once this is accomplished, you can use the EIML Model to deal with IaaS, PaaS, SaaS, AaaS, et al.

You should never sign an agreement with a PaaS provider who could lock you into specific IT technologies which may not provide you with the tools you need for your BPM. Part of the EIML model is to identify the required IT technologies and platforms which support the tools and possibly the third-party application solutions you want to implement. You must completely understand your Enterprise Data Model to be able to determine what you will need in the way of ETL and Data Integration tooling, structured databases (whether ACID relational or NoSQL), structured and unstructured data stores (possibly including Big Data technologies), data warehousing, and business analytics and intelligence tools.
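To make the relational-versus-NoSQL decision concrete, here is a minimal JavaScript sketch of the kind of ETL transform an Enterprise Data Model would have to specify: flattening normalized relational rows into a single denormalized document of the kind a NoSQL document store would hold. The entities and field names here are illustrative assumptions, not part of the EIML model itself.

```javascript
// Hypothetical ETL transform: resolve relational joins once, at load time,
// and embed the results in one document. All field names are assumptions.
function toDocument(customerRow, regionRow, orderRows) {
  return {
    _id: customerRow.customer_id,
    name: customerRow.name,
    // The join to the region table is resolved here...
    region: { id: regionRow.region_id, name: regionRow.name },
    // ...and the one-to-many orders relationship is embedded as an array.
    orders: orderRows.map(o => ({ id: o.order_id, total: o.total })),
  };
}

// Example input shaped like relational rows:
const doc = toDocument(
  { customer_id: 42, name: 'Acme Corp' },
  { region_id: 7, name: 'EMEA' },
  [{ order_id: 1001, total: 250.0 }]
);
console.log(JSON.stringify(doc));
```

The trade-off the sketch exposes is exactly what the Enterprise Data Model must settle: the document duplicates region data across customers in exchange for eliminating joins at read time.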

The EIML model would help you determine your transaction processing requirements; Internet, Web, and Web Application Services technologies; networking, security, disk, clustering, virtualization, fail-over, and backup and recovery; total overall processing and storage requirements; development tools; source code management; SDLC environment stacks; and even which vendors and providers can offer you the best solutions.

You may find the original post, "Enterprise Information Management Logistics – EIML", from October 2011 of this blog at the following link:

Enterprise Information Management Logistics – EIML

Enterprise Information Management Logistics Model

Enterprise Information Management Logistics Graphic Model

Enterprise Information Management Logistics Technologies Graphic Model

2012 – Cloud Computing Architecture in the Mobile Smart Device Age   7 comments

The following quote is from an article addressing the possibility that worldwide mobile smart device sales will surpass PC sales in 2012.

“In 2012, Gartner Group projects that worldwide PC sales will reach about 400 million units in 2012, while mobile smart phones will surpass 600 million units. Tablets will sell about 100 million units. That means that only about 35% of the new devices sold this year that will be connecting to the web will be Windows PCs. That’s how much the technology world has been turned on its head in just five years.”

We will be turning the corner on mobile smart computing this year. I believe the age of thin clients is on the verge of a major breakthrough. Mobile smart devices will become more popular than laptops, notebooks, and ultra-thins. I don't see All-In-One PC's becoming dominant, but they will probably still see some deployment in the Enterprise. We are entering the age of mobile smart devices and Cloud Computing.

The traditional bullpen and pod workstation environments are going to morph into transient work space environments. With Cloud Computing and Collaborative Workflow, project development, and management technologies supporting teleconferencing and telecommuting, it is now possible to have geographically dispersed work groups, departments, project teams, divisions, or corporations. I'm not going to overlook the importance of balance in the workplace community.

It is still very important to develop a project, departmental, group, division, and corporate culture. Some business models are going to continue to require that people be physically located together to provide products and services to customers. However, to varying degrees it will be possible for many typical office workers to telecommute, especially if their company has embraced automated workflow management that requires only the most minimal passing of materials through the workflow process.

For the IBM zEnterprise z/OS, z/VM, and z/VSE Operating Systems, the primary languages will probably be COBOL, Java, Assembler, C/C++, PL/I, REXX, JCL, XML, and HTML for applications development using CICS, IMS, DB2/SQL, Oracle RDBMS, z/OS HTTP, and batch processing. For z/TPF it will probably stay mostly Assembler. For z/OS UNIX and IFL for System z Linux, as well as zBX BladeCenter Power7 AIX and Linux, the primary technologies will be Apache HTTP and nginx Server with Tomcat; WebSphere or Oracle WebLogic with node.js; and PHP, Python, Perl, Ruby, C/C++, SQL, and PL/SQL.

The IBM zBX BladeCenter System x architecture, consisting of blade servers with AMD x86-64 Opteron 6-core 32-bit and 64-bit processors and Intel Xeon x86-64 processors supporting Solaris x86, 64-bit Linux, and Microsoft HPC Windows Server 2008 and Windows Server 8 platforms with Microsoft SQL Server 2008 or 2012, will probably use shell programming languages, CGI, Java, HTML, CSS, JavaScript, node.js, Ruby, PHP, Python, Perl, C/C++, C#, VB, and SQL.

With the proliferation of smart devices (mobile smart phones and pads), it looks like the GUI is going to stay thin client on OS interfaces and browsers. The primary browser client content will end up being HTML5, CSS3, JavaScript, PHP, .NET, et al. I see CGI with XML, the HTML5 and CSS3 hyper-text markup languages, and the PHP and Python languages with node.js moving more towards Internet and Web Server run-time environments on the proxy, cache, cloud, and host servers. JavaScript is already moving to the cloud and host servers with a run-time environment as a plugin to internet and web servers using node.js. It looks like JSON, as the data-interchange format for these JavaScript run-time environments, is going to take off as well.

It also looks like HTML5 and CSS3 will make Adobe Flash and Microsoft Silverlight obsolete at some point in time. Adobe and Microsoft have announced they will discontinue these products in favor of HTML5 and CSS3. The best cache, cloud, and host servers will be zEnterprise z114/z196 CPC's with CP's, zAAP's, zIIP's, IFL's, and ICF's, with zBX BladeCenter using Power7 and AMD Opteron and Intel Xeon x86-64 multi-core processor blades with InfiniBand.

The value and productivity of the "green screen", that is, non-graphical, non-hypertext-markup-language terminal interfaces, is not lost on the mainframe, and I have found them of extreme value when administering UNIX and Linux platforms. Even Microsoft has announced that its next generation of Windows Server platforms will offer terminal interfaces as an alternative to the GUI. The ability to use powerful command line syntax and scripting languages on terminals greatly enhances one's ability to administer and configure environments, and it gives IT systems engineers, programmers, and administrators a better understanding of the underlying technologies and their synergy.

Early in 2012 discussions regarding distributed file systems and NoSQL databases began to heat up. Now they are very hot. With the explosion of mobile smart devices and smart phones I foresee these technologies becoming the hot new technologies of 2013. With them will come a great demand for mobile smart device applications development. This will lead to an explosion in several technological areas.

Distributed computing technologies will now become the new paradigm. Server sprawl will become a serious problem for most large enterprises moving towards distributed computing technologies. A lack of standards, interfaces, and migration and integration technologies; insufficient network/NAS/SAN bandwidth and IPv6-compatible hardware and software; immature wireless networking; weak implementation of secured networking protocols; slow adoption of 64-bit application development; a lack of effective Collaborative Application Lifecycle Development technology integration; and the bypassing of effective System Configuration Management protocols will all contribute to an increase in outages and security breaches.

Another issue which must be addressed is the proliferation of essentially disparate UNIX-like and Linux kernels, with their unique or proprietary GUI and application development environments, running on wireless mobile smart devices and phones, networking hardware, networking protocol services servers, and small, mid-range, large enterprise, super-computer, and massively parallel processing servers connected to the Internet.