Ray Pairan's Technology Page


My Technologies

Published:
Genre: Non-Fiction: Technology



Link to my former Pairan Technology site:
Link to my very early technologies:

My Non-Microservice Architecture

  • My Architecture - A Notes
  • No more VMs (Virtual Machines) - we are now in the VC (Virtual Container) era with the entire application in the Cloud
  • Kubernetes will remotely manage Docker clusters in any environment including the AWS Cloud
  • Docker
    • Docker registries - (DockerHub - default, Google Container Registry, Quay, AWS Elastic Container Registry (ECR))
    • Docker containers are instances of images
    • Docker images are immutable files that are snapshots of containers
    • Using default DockerHub
      • Push Docker image to DockerHub
        $ docker push <docker-hub-username>/<docker-image>:<tag-version>
        $ docker push raypairan/my-architecture-a-app:1.0
      • Pull Docker image from DockerHub
        $ docker pull <docker-hub-username>/<docker-image>:<tag-version>
        $ docker pull raypairan/my-architecture-a-app:1.0
  • WebSocket
    • A synchronous or asynchronous connection is established via an HTTP upgrade handshake to a WebSocket endpoint
    • Connection remains active until either peer disconnects
    • Messaging can ensue between peers once connection is established
    • React client can use the JavaScript WebSocket API
    • Middle layer uses the Java WebSocket API of Java EE to expose endpoints
  • Cassandra for more traditional data and Elasticsearch for document-centric needs are recommended and may be used together or separately.
  • AWS will automatically scale up and down
  • Cassandra and Elasticsearch will independently automatically scale up and down
  • My Architecture - A is not a microservice architecture design but instead conveys an alternate WebSocket implementation.
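As a sketch of the WebSocket flow described above, a React client might use the browser's JavaScript WebSocket API roughly as follows. The endpoint URL, message format, and helper names are illustrative assumptions, not part of My Architecture - A itself:

```javascript
// Sketch of a browser/React client using the JavaScript WebSocket API.
// The URL and JSON message shape below are illustrative assumptions.

// Pure helpers for framing messages as JSON text frames.
function encodeMessage(type, payload) {
  return JSON.stringify({ type, payload });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

// Opens the connection; it remains active until either peer disconnects.
function connect(url) {
  const socket = new WebSocket(url); // HTTP upgrade handshake happens here
  socket.onopen = () => socket.send(encodeMessage("join", { room: "demo" }));
  socket.onmessage = (event) => {
    const msg = decodeMessage(event.data);
    console.log("peer message:", msg.type);
  };
  socket.onclose = () => console.log("connection closed");
  return socket;
}

// In a browser: connect("wss://example.com/app");
```

The same message-framing convention would be mirrored on the middle layer by the Java WebSocket API endpoints.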

What is a Technology Architect?

Simply put, a Technology Architect is someone who constantly surveys the technical landscape, learning the most useful, relevant, and least bloated technologies while also applying an innate intuition that leads to design patterns incorporating many of these technologies. This is a full-time job in the broad-based technology environment of the 21st Century that cannot be a subsidiary engagement subsumed by additional in-the-weeds development responsibilities. Contrary to popular belief, the Agile development process has not eliminated the Architect but has invigorated the position with individuals who can constantly learn, adapt, and cooperate in a dynamically changing open-source landscape.

Do you want architectural artifacts for some or all of the ideas conveyed on my technology page?

With the need to survive in a capitalist society and a basic love of eating, any additional artifacts, knowledge, and/or creative ability of mine must be paid for, for instance through a contract with my employer Infosys under which I am the principal Architect. In the past all of my ideas were freely conveyed, with those benefiting from my 22+ years of knowledge and effort not even recognizing where the progenitor creations they took credit for originated. Up to a point that is fine, but when these folks accelerate their careers on my sweat and toil only to leave me in obscurity pleading for a few scraps for sustenance, it is abundantly evident that the free model has failed miserably and must end.
My Work Style Assessment

What is Agile?

Extreme planning that does not fully engage customers by asking them up front for their user stories is not an Agile process – it is a would-be impostor.

Planning each week out for a customer is not Agile.

Having users or user-proxies work up user stories so you know what they want from a system standpoint is Agile.

A cooperative state of mind needs to exist between all stakeholders. Engage your customers and users not through a Waterfall-style top-down hierarchical information stream but instead through a flattened, team-oriented Agile approach. Even in the initial stages of a project of any form (including a Proof-of-Concept), eliciting user-story information from the user is essential. A collaborative determination of the primary user-specific desires driving the project must be reached before presenting anything to the paying customer.

Any project where weekly planning is necessary is nothing but a 20th Century process that will waste time and yield failure. Real Agile shops do not engage in continuous fine-grained planning that subsumes the entire engagement in a culture where preparation, documentation, and architectural artifacts are preeminent over logical united action that kindles results that meet user system expectations in each and every iterative Sprint.

What is the burn-down, or work remaining, out of the backlog of user stories? Are you providing meaningful application functionality that is fully tested and production ready after each Sprint? This is what is important from an Agile process perspective – NOT megabytes of documentation that no one will read. Locate your Agile tool of choice, like Jira, and do not waste precious intellectual cycles on extreme planning, for in a dynamically changing world plans are static slices of time that can never reflect reality.

Top-down initial planning of everything in detail, having the customer sign off on this plan, passing the plan with artifacts on to developers, and not requiring users or user-proxies to provide user stories – whatever you call it, this top-down process is not Agile. Involving users or user-proxies in non-technical discussions of their user stories, which become more detailed as progress is made iteration after iteration in fulfilling those stories – that is Agile.

This brings us back to the importance of having real users (not managers), or at least proxy users, who can help you understand whether they even like the current systems you are attempting to create or refactor. Having real users is an essential ingredient of meeting your customer's expectations, because ultimately the user is the real customer of any application, not those who believe they fully understand what the user wants. If the users were never asked their opinions of a system, how can a manager or anyone else know how they feel about an application undergoing redesign or an entirely new proposed system? Never assume you understand anything about your users until you ask them questions!

The following books are suggested reading material:
User Stories Applied: For Agile Software Development
Writing Effective Use Cases
Agile Project Management with Scrum

Recommended Technologies

React - Best ECMAScript 2015 modularized and componentized JavaScript library for creating user interfaces
NodeJS
Elastic Stack (ELK)
Kong - 21st Century API Platform (Manager and Gateway)
Istio - Service Mesh
Lua - Programming Language
Rust - Programming Language
GraphQL - An alternative to REST APIs that is a query language for APIs and data
Insomnia - This tool can make REST API and GraphQL requests
GraphiQL - Tool for testing GraphQL queries with a GraphQL API service layer
Cassandra - NoSQL decentralized database that stores data in a sorted, multidimensional hash table
Elasticsearch - Distributed, scalable, real-time search and analytics engine
Docker
Kubernetes - Container Orchestration
Postman - Can be used for API testing
AWS - Best Cloud
    Option 2: GraphQL FaaS Using React UI & Relay
    Three-Tier Web Application Serverless GraphQL design that maps to a Lambda function on a cloud like AWS. Relay facilitates the connection between React components on the client side and the data retrieved from GraphQL services. This is a significantly simpler design pattern than Option 1: REST API Microservices FaaS.
  • React
  • Relay
  • GraphQL
  • AWS API Gateway
  • AWS CloudFront - CDN
  • AWS Certificate Manager
  • AWS S3
  • AWS Lambda
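As a rough illustration of Option 2's server side, a single AWS Lambda function behind API Gateway could answer a GraphQL query. The schema, field names, and in-memory data below are assumptions for illustration only; a real service would use a GraphQL library (with Relay wiring the React components) against a real data source:

```javascript
// Sketch of Option 2's server side: one AWS Lambda behind API Gateway that
// answers a GraphQL query. Data and field names are illustrative assumptions.

// Toy data source standing in for a database.
const books = {
  "1": { id: "1", title: "Antifragile" },
  "2": { id: "2", title: "Release It" },
};

// Hand-rolled resolver: supports only queries like `{ book(id: "1") { title } }`.
function resolve(query) {
  const match = /book\(id:\s*"(\w+)"\)/.exec(query);
  if (!match) return { errors: [{ message: "unsupported query" }] };
  return { data: { book: books[match[1]] || null } };
}

// Lambda entry point (deployed as exports.handler in a real function);
// API Gateway proxies the POST body through as a JSON string.
const handler = async (event) => {
  const { query } = JSON.parse(event.body);
  return { statusCode: 200, body: JSON.stringify(resolve(query)) };
};
```

In production the hand-rolled resolver would be replaced by a proper GraphQL execution engine, but the request/response shape through API Gateway stays the same.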
    Option 7: Event Triggered Parallel Processing in AWS FaaS
    Fan-Out Design Pattern Parallel Processing in an AWS Serverless environment can be triggered by an AWS S3 delta that fires off a Lambda function that starts multiple parallel processes. Some of the positive effects of this design pattern are reduced latency over what is possible with a single process plus all the benefits associated with the cloud like automatic scaling up and down without the need to maintain any infrastructure. Many more benefits are possible but not conveyed in this short synopsis.
  • AWS API Gateway
  • AWS CloudFront - CDN
  • AWS Certificate Manager
  • AWS S3
  • AWS Lambda
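A minimal sketch of Option 7's fan-out entry point, assuming the standard S3 event shape. The chunking scheme and the local worker function are illustrative stand-ins; a production design would invoke separate Lambda workers via the AWS SDK:

```javascript
// Sketch of the Fan-Out pattern for Option 7: an S3 object change (delta)
// triggers this Lambda, which fans the work out into parallel tasks.
// processChunk is a stand-in for invoking real Lambda workers.

async function processChunk(key, chunkIndex) {
  // Placeholder for real work on one slice of the S3 object.
  return { key, chunkIndex, status: "done" };
}

// Lambda entry point (deployed as exports.handler in a real function).
const handler = async (event) => {
  // Standard S3 event shape: Records[].s3.object.key
  const key = event.Records[0].s3.object.key;
  const chunks = [0, 1, 2, 3];
  // All chunks run concurrently, reducing latency versus a single process.
  const results = await Promise.all(chunks.map((i) => processChunk(key, i)));
  return { key, processed: results.length };
};
```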
    Option 8: Event Triggered SNS Notification on Topic Parallel Processing in AWS FaaS
    SNS Fan-Out Design Pattern Parallel Processing in an AWS Serverless environment occurs when an SNS topic notification triggers multiple Lambda worker functions subscribed to the topic. Some of the positive effects of this design pattern are reduced latency over what is possible with a single process, the elimination of the single-entry Lambda function parallel-process initiator in favor of multiple SNS-subscribed Lambda worker functions, plus all the benefits associated with the cloud, like automatic scaling up and down without the need to maintain any infrastructure. Many more benefits are possible but not conveyed in this short synopsis.
  • AWS API Gateway
  • Simple Notification Service (SNS)
  • AWS CloudFront - CDN
  • AWS Certificate Manager
  • AWS S3
  • AWS Lambda
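Option 8's variant replaces the single initiator with multiple subscribed workers; one such worker Lambda might look like the sketch below. The message fields (e.g. jobId) are illustrative assumptions, while the Records[].Sns.Message path is the standard SNS event shape:

```javascript
// Sketch of one SNS-subscribed worker Lambda for Option 8. Every Lambda
// subscribed to the topic receives the published message, so the workers
// run in parallel with no single entry-point initiator function.

// Lambda entry point (deployed as exports.handler in a real function).
const handler = async (event) => {
  const results = [];
  for (const record of event.Records) {
    // Standard SNS event shape: the published message arrives as a JSON string.
    const job = JSON.parse(record.Sns.Message);
    results.push(`handled job ${job.jobId}`);
  }
  return results;
};
```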
Serverless Ecosystem - BaaS and/or Lambda Functions/FaaS
Serverless Framework - Manages, deploys, and controls resources on the cloud
HAProxy - Extremely fast load balancer that also supports path-based routing
Spring 5 - Projects list
Java EE 8 - Less densely complex than Spring MVC but you are still better off with a serverless microservices design on a cloud
git - Best Version Control
Notepad++ - Fantastic text editor
Jira - Agile planning tool

Fault-Tolerance - Deliberately Triggering Faults to Design Resilient Systems

The Netflix Simian Army
What is Simian Army?

Misc Informational Sources

Microservice Premium - Is a microservice architecture a good choice for the system you're working on? - Martin Fowler
Starbucks Does Not Use Two-Phase Commit - Gregor Hohpe
The Case for Shared Nothing - Michael Stonebraker
List of NoSQL Databases
CAP, Twelve Years Later: How the 'Rules' Have Changed - Eric Brewer
Manifesto for Agile Software Development
Principles of Object-Oriented Design
KISS Principle
Conway's Law - "How Do Committees Invent?"
Chebotko Diagrams - Used to model data in Cassandra

Recommended Technology Books

Antifragile - Things That Gain from Disorder
Author: Nassim Nicholas Taleb
November 2012
Antifragile Systems and Teams
Author: Dave Zwieback
April 2014
Just as human bones get stronger when subjected to stress and tension, and rumors or riots intensify when someone tries to repress them, many things in life benefit from stress, disorder, volatility, and turmoil. What Taleb has identified and calls “antifragile” is that category of things that not only gain from chaos but need it in order to survive and flourish. In The Black Swan, Taleb showed us that highly improbable and unpredictable events underlie almost everything about our world. In Antifragile, Taleb stands uncertainty on its head, making it desirable, even necessary, and proposes that things be built in an antifragile manner. The antifragile is beyond the resilient or robust. The resilient resists shocks and stays the same; the antifragile gets better and better. Furthermore, the antifragile is immune to prediction errors and protected from adverse events. Why is the city-state better than the nation-state, why is debt bad for you, and why is what we call “efficient” not efficient at all? Why do government responses and social policies protect the strong and hurt the weak? Why should you write your resignation letter before even starting on the job? How did the sinking of the Titanic save lives? The book spans innovation by trial and error, life decisions, politics, urban planning, war, personal finance, economic systems, and medicine. And throughout, in addition to the street wisdom of Fat Tony of Brooklyn, the voices and recipes of ancient wisdom, from Roman, Greek, Semitic, and medieval sources, are loud and clear.

All complex computer systems eventually break, despite all of the heavy-handed, bureaucratic change-management processes we throw at them. But some systems are clearly more fragile than others, depending on how well they cope with stress. In this O’Reilly report, Dave Zwieback explains how the DevOps methodology can help make your system antifragile. Systems are fragile when organizations are unprepared to handle changing conditions. As generalists adept at several roles, DevOps practitioners adjust more easily to the fast pace of change. Rather than attempt to constrain volatility, DevOps embraces disorder, randomness, and impermanence to make systems even better. The report covers:
  • Why Etsy, Netflix, and other antifragile companies constantly introduce volatility to test and upgrade their systems
  • How DevOps removes the schism between developers and operations, enlisting developers to deploy as well as build
  • Using continual experimentation and minor failures to make critical adjustments—and discover breakthroughs
  • How an overreliance on measurement and automation can make systems fragile
  • Why sharing increases trust, collaboration, and tribal knowledge
Download this free report and learn how the DevOps philosophy of Culture, Automation, Measurement, and Sharing makes use of changing conditions and even embarrassing mistakes to help improve your system—and your organization.
Learning Elastic Stack 6.0
Author: Sharath Kumar M N, Pranav Shukla
December 2017
Site Reliability Engineering
Author: Jennifer Petoff, Niall Richard Murphy, Chris Jones, Betsy Beyer
April 2016
Get valuable insights from your data by working with the different components of the Elastic stack such as Elasticsearch, Logstash, Kibana, X-Pack, and Beats. The Elastic Stack is a powerful combination of tools for distributed search, analytics, logging, and visualization of data from medium to massive data sets. The newly released Elastic Stack 6.0 brings new features and capabilities that empower users to find unique, actionable insights through these techniques. This book will give you a fundamental understanding of what the stack is all about, and how to use it efficiently to build powerful real-time data processing applications. After a quick overview of the newly introduced features in Elastic Stack 6.0, you'll learn how to set up the stack by installing the tools, and see their basic configurations. Then it shows you how to use Elasticsearch for distributed searching and analytics, along with Logstash for logging, and Kibana for data visualization. It also demonstrates the creation of custom plugins using Kibana and Beats. You'll find out about Elastic X-Pack, a useful extension for effective security and monitoring. We also provide useful tips on how to use the Elastic Cloud and deploy the Elastic Stack in production environments.

The overwhelming majority of a software system’s lifespan is spent in use, not in design or implementation. So, why does conventional wisdom insist that software engineers focus primarily on the design and development of large-scale computing systems? In this collection of essays and articles, key members of Google’s Site Reliability Team explain how and why their commitment to the entire lifecycle has enabled the company to successfully build, deploy, monitor, and maintain some of the largest software systems in the world. You’ll learn the principles and practices that enable Google engineers to make systems more scalable, reliable, and efficient—lessons directly applicable to your organization.
Domain-Driven Design: Tackling Complexity in the Heart of Software
Domain-Driven Design
Author: Eric Evans
2004
Continuous Delivery and DevOps - A Quickstart Guide - Third Edition
Author: Paul Swartout
October 2018
Readers learn how to use a domain model to make a complex development effort more focused and dynamic. A core of best practices and standard patterns provides a common language for the development team. A shift in emphasis-refactoring not just the code but the model underlying the code-in combination with the frequent iterations of Agile development leads to deeper insight into domains and enhanced communication between domain expert and programmer. Domain-Driven Design then builds on this foundation, and addresses modeling and design for complex systems and larger organizations.

Over the past few years, Continuous Delivery (CD) and DevOps have been in the spotlight in tech media, at conferences, and in boardrooms alike. Many articles and books have been written covering the technical aspects of CD and DevOps, yet the vast majority of the industry doesn’t fully understand what they actually are and how, if adopted correctly, they can help organizations drastically change the way they deliver value. This book will help you figure out how CD and DevOps can help you to optimize, streamline, and improve the way you work to consistently deliver quality software. In this edition, you’ll be introduced to modern tools, techniques, and examples to help you understand what the adoption of CD and DevOps entails. It provides clear and concise insights into what CD and DevOps are all about, how to go about both preparing for and adopting them, and what quantifiable value they bring. You will be guided through the various stages of adoption, the impact they will have on your business and those working within it, how to overcome common problems, and what to do once CD and DevOps have become truly embedded. Included within this book are some real-world examples, tricks, and tips that will help ease the adoption process and allow you to fully utilize the power of CD and DevOps.
Production-Ready Microservices
Author: Susan J. Fowler
December 2016
Designing Data-Intensive Applications
Author: Martin Kleppmann
March 2017
One of the biggest challenges for organizations that have adopted microservice architecture is the lack of architectural, operational, and organizational standardization. After splitting a monolithic application or building a microservice ecosystem from scratch, many engineers are left wondering what’s next. In this practical book, author Susan Fowler presents a set of microservice standards in depth, drawing from her experience standardizing over a thousand microservices at Uber. You’ll learn how to design microservices that are stable, reliable, scalable, fault tolerant, performant, monitored, documented, and prepared for any catastrophe.

Data is at the center of many challenges in system design today. Difficult issues need to be figured out, such as scalability, consistency, reliability, efficiency, and maintainability. In addition, we have an overwhelming variety of tools, including relational databases, NoSQL datastores, stream or batch processors, and message brokers. What are the right choices for your application? How do you make sense of all these buzzwords? In this practical and comprehensive guide, author Martin Kleppmann helps you navigate this diverse landscape by examining the pros and cons of various technologies for processing and storing data. Software keeps changing, but the fundamental principles remain the same. With this book, software engineers and architects will learn how to apply those ideas in practice, and how to make full use of data in modern applications.
Artificial Intelligence - Foundations of Computational Agents
Author: David L. Poole; Alan K. Mackworth
September 2017
Elasticsearch: The Definitive Guide
Author: Clinton Gormley, Zachary Tong
January 2015
Artificial intelligence, including machine learning, has emerged as a transformational science and engineering discipline. Artificial Intelligence: Foundations of Computational Agents presents AI using a coherent framework to study the design of intelligent computational agents. By showing how the basic approaches fit into a multidimensional design space, readers learn the fundamentals without losing sight of the bigger picture. The new edition also features expanded coverage on machine learning material, as well as on the social and ethical consequences of AI and ML. The book balances theory and experiment, showing how to link them together, and develops the science of AI together with its engineering applications. Although structured as an undergraduate and graduate textbook, the book's straightforward, self-contained style will also appeal to an audience of professionals, researchers, and independent learners. The second edition is well-supported by strong pedagogical features and online resources to enhance student comprehension.

Whether you need full-text search or real-time analytics of structured data—or both—the Elasticsearch distributed search engine is an ideal way to put your data to work. This practical guide not only shows you how to search, analyze, and explore data with Elasticsearch, but also helps you deal with the complexities of human language, geolocation, and relationships. If you’re a newcomer to both search and distributed systems, you’ll quickly learn how to integrate Elasticsearch into your application. More experienced users will pick up lots of advanced techniques. Throughout the book, you’ll follow a problem-based approach to learn why, when, and how to use Elasticsearch features.
Kubernetes: Up and Running
Author: Brendan Burns, Kelsey Hightower, Joe Beda
September 2017
Agile Project Management With Scrum
Author: Ken Schwaber
February 2004
Legend has it that Google deploys over two billion application containers a week. How’s that possible? Google revealed the secret through a project called Kubernetes, an open source cluster orchestrator (based on its internal Borg system) that radically simplifies the task of building, deploying, and maintaining scalable distributed systems in the cloud. This practical guide shows you how Kubernetes and container technology can help you achieve new levels of velocity, agility, reliability, and efficiency. Authors Kelsey Hightower, Brendan Burns, and Joe Beda—who’ve worked on Kubernetes at Google and other organizations—explain how this system fits into the lifecycle of a distributed application. You will learn how to use tools and APIs to automate scalable distributed systems, whether it is for online services, machine-learning applications, or a cluster of Raspberry Pi computers.

The rules and practices for Scrum-a simple process for managing complex projects-are few, straightforward, and easy to learn. But Scrum's simplicity itself-its lack of prescription-can be disarming, and new practitioners often find themselves reverting to old project management habits and tools and yielding lesser results. In this illuminating series of case studies, Scrum co-creator and evangelist Ken Schwaber identifies the real-world lessons—the successes and failures—culled from his years of experience coaching companies in agile project management. Through them, you'll understand how to use Scrum to solve complex problems and drive better results-delivering more valuable software faster.
User Stories Applied: For Agile Software Development
User Stories Applied
Author: Mike Cohn
March 2004
The Enterprise Path to Service Mesh Architectures
Author: Lee Calcote
October 2018
Thoroughly reviewed and eagerly anticipated by the agile community, User Stories Applied offers a requirements process that saves time, eliminates rework, and leads directly to better software. The best way to build software that meets users' needs is to begin with "user stories": simple, clear, brief descriptions of functionality that will be valuable to real users. In User Stories Applied, Mike Cohn provides you with a front-to-back blueprint for writing these user stories and weaving them into your development lifecycle. You'll learn what makes a great user story, and what makes a bad one. You'll discover practical ways to gather user stories, even when you can't speak with your users. Then, once you've compiled your user stories, Cohn shows how to organize them, prioritize them, and use them for planning, management, and testing.

This free, complete O’Reilly ebook shows how service meshes work and provides a path to help you build or convert applications. A service mesh provides a configurable infrastructure layer that makes service-to-service communication flexible, reliable, and fast. Whether you’re preparing to build microservice-architected, cloud-native applications or looking to modernize your existing set of application services, you may want to consider using a service mesh. The more services your enterprise manages, the more intense your headaches are likely to be. Author Lee Calcote, Head of Technology Strategy at SolarWinds, demonstrates how service meshes work and provides a path to help you build or convert applications using this architecture. This ebook is ideal for developers, operators, architects, and IT leaders tasked with building distributed systems. You’ll learn how service meshes function with other technologies in your stack as well as how to overcome issues that may arise.
AWS Certified Solutions Architect
AWS Certified Solutions Architect - Official Study Guide
Author: Joe Baron, Hisham Baz, Tim Bixler, Biff Gaut, Kevin E. Kelly, Sean Senior, John Stamper
March 2018
Building Evolutionary Architectures
Author: Neal Ford, Rebecca Parsons, Patrick Kua
September 2017
This is your opportunity to take the next step in your career by expanding and validating your skills on the AWS cloud. AWS has been the frontrunner in cloud computing products and services, and the AWS Certified Solutions Architect Official Study Guide for the Associate exam will get you fully prepared through expert content, real-world knowledge, key exam essentials, chapter review questions, access to Sybex's interactive online learning environment, and much more.

The software development ecosystem is constantly changing, providing a constant stream of new tools, frameworks, techniques, and paradigms. Over the past few years, incremental developments in core engineering practices for software development have created the foundations for rethinking how architecture changes over time, along with ways to protect important architectural characteristics as it evolves. This practical guide ties those parts together with a new way to think about architecture and time.
Building Microservices
Author: Sam Newman
February 2015
Release It
Release It: Design and Deploy Production-Ready Software
Author: Michael Nygard
January 2018
Distributed systems have become more fine-grained in the past 10 years, shifting from code-heavy monolithic applications to smaller, self-contained microservices. But developing these systems brings its own set of headaches. With lots of examples and practical advice, this book takes a holistic view of the topics that system architects and administrators must consider when building, managing, and evolving microservice architectures.

A single dramatic software failure can cost a company millions of dollars—but can be avoided with simple changes to design and architecture. This new edition of the best-selling industry standard shows you how to create systems that run longer, with fewer failures, and recover better when bad things happen. New coverage includes DevOps, microservices, and cloud-native architecture. Stability antipatterns have grown to include systemic problems in large-scale systems. This is a must-have pragmatic guide to engineering for production systems.
Restful Web Services
Author: Leonard Richardson, Sam Ruby
December 2008
Rest In Practice
Rest In Practice: Hypermedia and Systems Architecture
Author: Savas Parastatidis, Jim Webber, Ian Robinson
September 2010
You've built web sites that can be used by humans. But can you also build web sites that are usable by machines? That's where the future lies, and that's what RESTful Web Services shows you how to do. The World Wide Web is the most popular distributed application in history, and Web services and mashups have turned it into a powerful distributed computing platform. But today's web service technologies have lost sight of the simplicity that made the Web successful. They don't work like the Web, and they're missing out on its advantages.

Why don't typical enterprise projects go as smoothly as projects you develop for the Web? Does the REST architectural style really present a viable alternative for building distributed systems and enterprise-class applications? In this insightful book, three SOA experts provide a down-to-earth explanation of REST and demonstrate how you can develop simple and elegant distributed hypermedia systems by applying the Web's guiding principles to common enterprise computing problems. You'll learn techniques for implementing specific Web technologies and patterns to solve the needs of a typical company as it grows from modest beginnings to become a global enterprise.
Node.js Web Development - Fourth Edition
Author: David Herron
May 2018
Java WebSocket Programming
Author: Danny Coward
October 2013
Create real-time applications using Node.js 10, Docker, MySQL, MongoDB, and Socket.IO with this practical guide and go beyond the developer's laptop to cover live deployment, including HTTPS and hardened security. Node.js is a server-side JavaScript platform using an event-driven, non-blocking I/O model allowing users to build fast and scalable data-intensive applications running in real time. This book gives you an excellent starting point, bringing you straight to the heart of developing web applications with Node.js. You will progress from a rudimentary knowledge of JavaScript and server-side development to being able to create, maintain, deploy and test your own Node.js application. You will understand the importance of transitioning to functions that return Promise objects, and the difference between fs, fs/promises and fs-extra. With this book you'll learn how to use the HTTP Server and Client objects, data storage with both SQL and MongoDB databases, real-time applications with Socket.IO, mobile-first theming with Bootstrap, microservice deployment with Docker, authenticating against third-party services using OAuth, and use some well known tools to beef up security of Express 4.16 applications.

Build dynamic enterprise Web applications that fully leverage state-of-the-art communication technologies. Written by the leading expert on Java WebSocket programming, this Oracle Press guide offers practical development strategies and detailed example applications. Java WebSocket Programming explains how to design client/server applications, incorporate full-duplex messaging, establish connections, create endpoints, handle path mapping, and secure data. You’ll also learn how to encrypt Web transmissions and enrich legacy applications with Java WebSocket.
Rust Quick Start Guide
Author: Daniel Arbuckle
October 2018
Kong: Becoming a King of API Gateways
Author: Alex Kovalevych, Robert Buchanan, Daniel Lee, Chelsy Mooy, Xavier Bruhiere & Jose Ramon Huerga
April 2018
Rust is an emerging programming language applicable to areas such as embedded programming, network programming, system programming, and web development. This book will take you from the basics of Rust to a point where your code compiles and does what you intend it to do! This book starts with an introduction to Rust and how to get set for programming, including the rustup and cargo tools for managing a Rust installation and development workflow. Then you'll learn about the fundamentals of structuring a Rust program, such as functions, mutability, data structures, implementing behavior for types, and many more. You will also learn about concepts that Rust handles differently from most other languages. After understanding the Basics of Rust programming, you will learn about the core ideas, such as variable ownership, scope, lifetime, and borrowing. After these key ideas, you will explore making decisions in Rust based on data types by learning about match and if let expressions. After that, you'll work with different data types in Rust, and learn about memory management and smart pointers. An API Gateway is an essential component in microservice architecture. This book is useful for IT architects, DevOps engineers, CTOs and security experts willing to understand how to use Kong to create and expose APIs. Even if you are not already familiar with Kong, it will only take a few minutes to create your first API. Are you an architect interested in understanding how an API Gateway can simplify and improve security of a micorservices architecture? Are you a developer interested in knowing what you can do with Kong plugins, and how you can extend Kong with custom Lua plugins? Or are you an Ops / Sysadmin needing to know how to operate Kong in a multi-region environment? This book addresses these needs and more. Use an API gateway to simplify and improve the security of your microservices architecture. Write Kong plugins with Lua. 
Deploy Kong and Cassandra in a multi-region environment. Use load balancing features.
Cassandra: The Definitive Guide, 2nd Edition
Author: Eben Hewitt, Jeff Carpenter
July 2016
Imagine what you could do if scalability wasn't a problem. With this hands-on guide, you’ll learn how the Cassandra database management system handles hundreds of terabytes of data while remaining highly available across multiple data centers. This expanded second edition—updated for Cassandra 3.0—provides the technical details and practical examples you need to put this database to work in a production environment. Authors Jeff Carpenter and Eben Hewitt demonstrate the advantages of Cassandra’s non-relational design, with special attention to data modeling. If you’re a developer, DBA, or application architect looking to solve a database scaling issue or future-proof your application, this guide helps you harness Cassandra’s speed and flexibility.

Inviting Disaster: Lessons From the Edge of Technology
Author: James R. Chiles
December 2001
Combining captivating storytelling with eye-opening findings, Inviting Disaster delves inside some of history's worst catastrophes in order to show how increasingly "smart" systems leave us wide open to human tragedy. Weaving a dramatic narrative that explains how breakdowns in these systems result in such disasters as the chain-reaction crash of the Air France Concorde and the meltdown at the Chernobyl Nuclear Power Station, Chiles vividly demonstrates how the battle between man and machine may be escalating beyond manageable limits -- and why we all have a stake in its outcome.
Microservice Architecture
Author: Mike Amundsen, Matt McLarty, Ronnie Mitra, Irakli Nadareishvili
August 2016
Have you heard about the tremendous success Amazon and Netflix have had by switching to a microservice architecture? Are you wondering how this can benefit your company? Or are you skeptical about how it might work? If you’ve answered yes to any of these questions, this practical book will benefit you. You'll learn how to take advantage of the microservice architectural style for building systems, and learn from the experiences of others to adopt and execute this approach most successfully.

Lua Quick Start Guide
Author: Gabor Szauer
July 2018
Lua is a small, powerful and extendable scripting/programming language that can be used for learning to program, and writing games and applications, or as an embedded scripting language. There are many popular commercial projects that allow you to modify or extend them through Lua scripting, and this book will get you ready for that. This book is the easiest way to learn Lua. It introduces you to the basics of Lua and helps you to understand the problems it solves. You will work with the basic language features, the libraries Lua provides, and powerful topics such as object-oriented programming. Every aspect of programming in Lua, variables, data types, functions, tables, arrays and objects, is covered in sufficient detail for you to get started. You will also find out about Lua's module system and how to interface with the operating system.
Continuous API Management
Author: Mike Amundsen, Ronnie Mitra, Mehdi Medjaoui, Erik Wilde
November 2018
A lot of work is required to release an API, but the effort doesn’t always pay off. Overplanning before an API matures is a wasted investment, while underplanning can lead to disaster. This practical guide provides maturity models for individual APIs and multi-API landscapes to help you invest the right human and company resources for the right maturity level at the right time. How do you balance the desire for agility and speed with the need for robust and scalable operations? Four experts from the API Academy show software architects, program directors, and product owners how to maximize the value of their APIs by managing them as products through a continuous life cycle.

Docker: Up & Running
Author: Karl Matthias, Sean Kane
June 2015
Docker is quickly changing the way that organizations are deploying software at scale. But understanding how Linux containers fit into your workflow—and getting the integration details right—are not trivial tasks. With this practical guide, you’ll learn how to use Docker to package your applications with all of their dependencies, and then test, ship, scale, and support your containers in production. Two Lead Site Reliability Engineers at New Relic share much of what they have learned from using Docker in production since shortly after its initial release. Their goal is to help you reap the benefits of this technology while avoiding the many setbacks they experienced.
Arduino Robotic Projects
Author: Richard Grimmett
August 2014
This book is for anyone who has been curious about using Arduino to create robotic projects that were previously the domain of research labs of major universities or defense departments. Some programming background is useful, but if you know how to use a PC, you can, with the aid of the step-by-step instructions in this book, construct complex robotic projects that can roll, walk, swim, or fly. Arduino is an open source microcontroller, built on a single circuit board, that is capable of receiving sensory input from the environment and controlling interactive physical objects. Arduino Robotic Projects starts with the fundamentals of turning on the basic hardware and then provides complete, step-by-step instructions that allow almost anyone to use this low-cost hardware platform. You'll build projects that can move using DC motors, walk using servo motors, and then add sensors to avoid barriers. You'll also learn how to add more complex navigational techniques such as GPS so that your robot won't get lost.

Beginning React
Author: Andrea Chiarelli
July 2018
Projects like Angular and React are rapidly changing how development teams build and deploy web applications to production. In this book, you’ll learn the basics you need to get up and running with React and tackle real-world projects and challenges. It includes helpful guidance on how to consider key user requirements within the development process, and also shows you how to work with advanced concepts such as state management, data-binding, routing, and the popular component markup that is JSX. As you complete the included examples, you’ll find yourself well-equipped to move onto a real-world personal or professional frontend project.
Serverless Design Patterns and Best Practices
Author: Brian Zambrano
April 2018
Serverless applications handle many problems that developers face when running systems and servers. The serverless pay-per-invocation model can also result in drastic cost savings, contributing to its popularity. While it's simple to create a basic serverless application, it's critical to structure your software correctly to ensure it continues to succeed as it grows. Serverless Design Patterns and Best Practices presents patterns that can be adapted to run in a serverless environment. You will learn how to develop applications that are scalable, fault tolerant, and well-tested. The book begins with an introduction to the different design pattern categories available for serverless applications. You will learn the trade-offs between GraphQL and REST and how they fare regarding overall application design in a serverless ecosystem. The book will also show you how to migrate an existing API to a serverless backend using AWS API Gateway. You will learn how to build event-driven applications using queuing and streaming systems, such as AWS Simple Queuing Service (SQS) and AWS Kinesis. Patterns for data-intensive serverless applications are also explained, including the lambda architecture and MapReduce.

Learning GraphQL - Declarative Data Fetching for Modern Web Apps
Author: Alex Banks, Eve Porcello
August 2018
Why is GraphQL the most innovative technology for fetching data since Ajax? By providing a query language for your APIs and a runtime for fulfilling queries with your data, GraphQL presents a clear alternative to REST and ad hoc web service architectures. With this practical guide, Alex Banks and Eve Porcello deliver a clear learning path for frontend web developers, backend engineers, and project and product managers looking to get started with GraphQL. You’ll explore graph theory, the graph data structure, and GraphQL types before learning hands-on how to build a schema for a photo-sharing application. This book also introduces you to Apollo Client, a popular framework you can use to connect GraphQL to your user interface.
Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, Video Enhanced Edition
Author: Jez Humble, David Farley
July 2010
Getting software released to users is often a painful, risky, and time-consuming process. This groundbreaking book sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours, sometimes even minutes, no matter the size of a project or the complexity of its code base. Jez Humble and David Farley begin by presenting the foundations of a rapid, reliable, low-risk delivery process. Next, they introduce the “deployment pipeline,” an automated process for managing all changes, from check-in to release. Finally, they discuss the “ecosystem” needed to support continuous delivery, from infrastructure, data and configuration management to governance. The authors introduce state-of-the-art techniques, including automated infrastructure management and data migration, and the use of virtualization. For each, they review key issues, identify best practices, and demonstrate how to mitigate risks.

The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA
Author: Diane Vaughan
January 2016
When the Space Shuttle Challenger exploded on January 28, 1986, millions of Americans became bound together in a single, historic moment. Many still vividly remember exactly where they were and what they were doing when they heard about the tragedy. Diane Vaughan recreates the steps leading up to that fateful decision, contradicting conventional interpretations to prove that what occurred at NASA was not skullduggery or misconduct but a disastrous mistake. Why did NASA managers, who not only had all the information prior to the launch but also were warned against it, decide to proceed? In retelling how the decision unfolded through the eyes of the managers and the engineers, Vaughan uncovers an incremental descent into poor judgment, supported by a culture of high-risk technology. She reveals how and why NASA insiders, when repeatedly faced with evidence that something was wrong, normalized the deviance so that it became acceptable to them. In a new preface, Vaughan reveals the ramifications for this book and for her when a similar decision-making process brought down NASA's Space Shuttle Columbia in 2003.
TCP/IP Sockets In Java
Author: Kenneth Calvert, Michael Donahoo
February 2008
The API (application programming interface) reference sections in each chapter, which describe the relevant parts of each class, have been replaced with (i) a summary section that lists the classes and methods used in the code, and (ii) a "gotchas" section that mentions nonobvious or poorly documented aspects of the objects. In addition, the book covers several new classes and capabilities introduced in the last few revisions of the Java platform. New abstractions to be covered include NetworkInterface, InterfaceAddress, Inet4/6Address, SocketAddress/InetSocketAddress, Executor, and others; extended access to low-level network information; support for IPv6; more complete access to socket options; and scalable I/O. The example code is also modified to take advantage of new language features such as annotations, enumerations, as well as generics and implicit iterators where appropriate.

Lean Software Development: An Agile Toolkit
Author: Mary Poppendieck, Tom Poppendieck
August 2003
In Lean Software Development, Mary and Tom Poppendieck identify seven fundamental "lean" principles, adapt them for the world of software development, and show how they can serve as the foundation for agile development approaches that work. Along the way, they introduce 22 "thinking tools" that can help you customize the right agile practices for any environment. Better, cheaper, faster software development. You can have all three–if you adopt the same lean principles that have already revolutionized manufacturing, logistics and product development.
A Concise Guide to Microservices for Executives (Now for DevOps too!)
Author: Alasdair Gilchrist
September 2018
Organizations that have successfully laid a foundation for continuous innovation and agility have adopted microservice architectures to respond rapidly to the demands of their business. Microservices are the evolution of best-practice architectural principles that shape the delivery of solutions to the business in the form of services. All businesses, no matter what industry they are in, must strive to deliver the ideal customer experience, as customers are more demanding than ever and will abandon a business that is too slow to respond. A microservice architecture aligns with the business in such a way that changes to your business can be dealt with in an agile fashion. The ease and speed with which your company can change will determine your ability to react to trends in your industry to remain competitive. In this updated 2nd edition we take a high-level approach to describing the microservice architecture and how that aligns with the organisation's business goals. We describe the microservice patterns, and the pros and cons of when and where they should be deployed, which provide you with a good overall education in this new development paradigm. However, in this updated edition we go much further, as we take a deeper dive into microservice design, implementation and the nuances of networking and monitoring. We discuss preferred infrastructure models and connectivity protocols as well as contemplate several use-cases for microservices such as micro-front-ends, the IoT and GDPR. Finally, we close with an extensive summary of the main takeaways from this, the 2nd edition of 'A concise guide to Microservices for Executives - and now DevOps too!'

Learn to Program with C++ - First Edition
Author: John Smiley
October 2002
Join Professor Smiley's C++ class as he teaches essential skills in programming, coding, and more. Using a student-instructor conversational format, this book starts at the very beginning with crucial programming fundamentals. You'll quickly learn how to identify customer needs so that you can create an application that achieves programming objectives--just like experienced programmers. By identifying clear client goals, you'll learn important programming basics--like how computers view input and execute output based on the information they are given--then use those skills to develop real-world applications. Participate in this one-of-a-kind classroom experience and see why Professor Smiley is renowned for making learning fun and easy.
  • Learn fundamental programming concepts, which can be applied to multiple languages
  • Develop your C++ skills with real-world, hands-on programming projects
  • Work with program variables, constants, and C++ data types
  • Create and run a C++ program using Windows Notepad
  • Adapt to runtime conditions with selection structures and statements
  • Use loops to increase your programming power
  • Learn about pointers, arrays, objects, classes, and more
Restlet in Action - Developing RESTful web APIs in Java
Authors: Jerome Louvel, Thierry Templier, and Thierry Boileau
September 2012
Restlet in Action gets you started with the Restlet Framework and the REST architecture style. You'll create and deploy applications in record time while learning to use popular RESTful Web APIs effectively. This book looks at the many aspects of web development, on both the server and client side, along with cloud computing, mobile Android devices, and Semantic Web applications.

Natural Ecosystem Replicated In Overly Complex Technology

"The number of errors in code correlates strongly with the amount of code and the complexity of the code."
- Bjarne Stroustrup
A Tour of C++, 2nd edition
Posted on:
Author: Ray Pairan Jr.

Natural ecosystems like the biosphere are sophisticated, interconnected, complex dynamic systems in which slight changes to any member element can ripple across the whole organized-chaotic sub-structure, precipitating a series of uncontrolled and typically unknown events. Balance in natural systems is achieved over eons of transmutations, and shorter-term transitions, moving toward symbiotic, sustainable state changes that tend to keep the entire ecosystem stable. The complex technical systems that human beings design are replicas of the natural ecosystems we come in contact with daily. These numerous sub-system marvels, with layers of open-source abstractions and thousands of service and micro-service interfaces, have the same vulnerability that plagues all complex dynamic systems: system destabilization triggered by a single sub-system's deviation from its expected behavior.

There is always the danger when adding layers of abstraction to systems that we will create overly complex dynamic systems of the same brittleness as natural systems. Every interface or connection point is a weak link in a chain of relative complexity. A network, either natural (like the air, ocean, lithosphere, or biosphere) or human (like the Internet) can amplify an errant elemental sub-system connection or interface instability throughout an entire system superstructure.

Well-designed complex technical systems that implement the bulkhead design pattern can inhibit errors in sub-systems from rippling across and destabilizing the larger system, by limiting how far the entire system diverges from its normal complex-dynamic-system homeostasis. Another way to reduce undesirable system-superstructure aberrations outside normal operating parameters is to build systems with fewer layers of abstraction, limited complexity, and fewer interconnected dependencies.
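
The bulkhead idea can be sketched in a few lines of Java. This is a minimal illustration, not a production implementation (the class and method names are my own), using a bounded Semaphore so that a slow or failing dependency can exhaust only its own compartment of permits, never the whole caller:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Minimal bulkhead sketch: each downstream dependency gets its own
// bounded compartment of permits. A misbehaving sub-system can tie up
// only its own permits, so its failure does not ripple system-wide.
class Bulkhead {
    private final Semaphore permits;

    Bulkhead(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    // Runs the call if a permit is available; otherwise fails fast
    // with the supplied fallback instead of queueing indefinitely.
    <T> T call(Supplier<T> action, T fallback) {
        if (!permits.tryAcquire()) {
            return fallback; // compartment full: fail fast
        }
        try {
            return action.get();
        } finally {
            permits.release();
        }
    }
}
```

Libraries such as Hystrix or Resilience4j offer hardened versions of the same idea; the point here is only that the isolation mechanism itself is small.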

Steps should always be taken to actively reduce complexity, the layers of abstractions, and the number of dependencies across all elements of a system superstructure. Technologists have the responsibility to embrace a mindset of constant simplification of the system architecture and code-base - for complexity begets instability – the bizarre destabilized disturbances that travel unforeseeable pathways.


What is DevOps?

Posted on:
Author: Ray Pairan Jr.
  • Development working closely with Operations in a non-siloed cooperative Agile environment
  • Embracing Automation
    • Source Code Control (Git, SVN, ...)
    • Build (Maven, Gradle, npm, ...)
    • Test Automation (JUnit, Postman, Newman, JFrog, Selenium, Cucumber, Gherkin, ...)
    • Deployment (Ansible, Vagrant, Docker, Chef, Puppet, ...)
    • Continuous Integration (CI) & Continuous Deployment (CD)
      • CI Servers (Jenkins, Hudson, ...)
    • Monitoring (Logstash, Kibana, Spring Actuator, ...)
    • Application Performance Management (APM)
      • Tools (New Relic, App Dynamics, Dynatrace, DataDog, ...)
    • Code Analysis (Sonatype, JaCoCo, PMD, FindBugs, ...)
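
The automation stages above are typically stitched together in a CI server such as Jenkins. As a hedged sketch only — stage names, tool choices, and commands are illustrative assumptions, not a prescribed pipeline — a minimal declarative Jenkinsfile tying build, test, analysis, and deployment together might look like:

```groovy
// Hypothetical declarative Jenkinsfile; every stage name and shell
// command here is an illustrative placeholder for your own toolchain.
pipeline {
    agent any
    stages {
        stage('Build')   { steps { sh 'mvn -B clean package' } }
        stage('Test')    { steps { sh 'mvn -B test' } }
        stage('Analyze') { steps { sh 'mvn -B pmd:check jacoco:report' } }
        stage('Deploy')  { steps { sh 'ansible-playbook deploy.yml' } }
    }
}
```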

AWS Database Overview

Posted on:
Author: Ray Pairan Jr.

The synopsis of AWS database features below does not elaborate on Amazon Redshift or Amazon DynamoDB.

Primary Instance: The main instance, supporting both read and write workloads. When data is modified, the change occurs on the primary instance. Each Amazon Aurora DB cluster has one primary instance.

Databases

  • RDBMS – Amazon Relational Database Service (RDS)
  • NOSQL – Amazon DynamoDB
  • Warehouse – Amazon Redshift

Amazon EC2

  • Any database engine can be run within Amazon EC2 instances, but you must handle the installation and administration
  • Amazon RDS Oracle and Microsoft SQL Server are their own unique products that require appropriate licenses to operate in AWS. Bringing over your own Enterprise Edition or other license to AWS is acceptable.

Amazon RDS

  • Supports six RDBMS engines: Oracle, Amazon Aurora, PostgreSQL, MS SQL Server, MariaDB, MySQL
  • Each Amazon RDS Instance can scale up to 16TB
    • Storage expansion is supported for all the database engines except SQL Server
  • Access to the underlying OS through Remote Desktop Protocol (RDP) or SSH is unavailable
  • Pricing
  • Security
    • Use AWS Identity and Access Management (IAM) policies
    • Deploy Amazon RDS DB instances into a private subnet within an Amazon VPC
    • Restrict access with network Access Control Lists (ACLs) and security groups
    • Standard database in-transit and at-rest encryption
    • At-rest encryption occurs via AWS Key Management Service (KMS) or Transparent Data Encryption (TDE)
  • Scaling
    • Horizontal scaling
      • Does not require sharding across multiple database instances
      • Offload read transactions from the primary instance onto read replica instances
      • Read replicas only supported in MySQL, PostgreSQL, MariaDB, & Amazon Aurora
      • NoSQL databases like Amazon DynamoDB are designed to scale horizontally
      • MySQL, PostgreSQL, and MariaDB can have up to 5 read replicas
      • Amazon Aurora can have up to 15 read replicas
      • ELB load balancer does not support routing of traffic to RDS instances
    • Vertical scaling
      • Choose from 18 instance sizes when resizing RDS MySQL, PostgreSQL, MariaDB, Oracle, or Microsoft SQL Server instances
      • 5 memory-optimized instance sizes
  • Multi-AZ deployments are used only for disaster recovery
    • Creates a database cluster across Availability Zones (AZ)
    • Place copy of database in another AZ for disaster recovery purposes
    • Synchronously replicates from primary to secondary instance on another AZ
    • DNS name remains the same
    • CNAME changes to point to the standby secondary instance
    • Fail-over points existing database endpoint to a new IP address – no need to change connection string manually
    • Fail-over within 1 to 2 minutes
  • Improve database performance
    • Use read replicas to increase read performance
      • Only supported for MySQL, PostgreSQL, MariaDB, and Amazon Aurora
    • Use Amazon ElastiCache
  • Backups 2 Types
    • Automated
    • Manual DB Snapshots
  • Recovery
    • Recovery Point Objective (RPO) defined by organization
      • Maximum period of data loss acceptable in event of failure
      • Measured in minutes
    • Recovery Time Objective (RTO) defined by organization
      • Maximum downtime permitted to recover from backup and resume processing
      • Measured in hours or days
  • Elastic Block Store (Amazon EBS)
    • New Elastic Volumes
    • Provisioned IOPS (SSD) storage
      • Up to 40,000 IOPS
      • Scales up to 16TB
      • I/O intensive workloads
    • General Purpose (SSD) gp2
      • Up to 40,000 IOPS
      • Scales up to 16TB
      • Provides burst performance
      • For small to medium sized databases
    • Magnetic
      • Standard storage
  • Amazon Aurora: based upon MySQL with internal components geared more to service-orientation.
    • Create DB cluster
    • Amazon Aurora cluster volumes span multiple Availability Zones
    • Aurora has 2 instance types
      • Primary
      • Replica
    • Each cluster has 1 primary instance
    • Each cluster can have up to 15 Replica instances
    • Automatic fail-over from one Availability Zone to another occurs within 1 to 2 minutes
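
Because ELB does not route traffic to RDS instances, read/write splitting is usually done on the client side. The sketch below is a hypothetical Java helper, not an AWS API (the class and the endpoint names are illustrative placeholders): writes always go to the primary endpoint, while reads rotate round-robin across read-replica endpoints.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Client-side read/write splitting sketch for RDS read replicas.
// All class and endpoint names here are hypothetical placeholders.
class EndpointRouter {
    private final String primary;
    private final List<String> replicas;
    private final AtomicInteger next = new AtomicInteger();

    EndpointRouter(String primary, List<String> replicas) {
        this.primary = primary;
        this.replicas = replicas;
    }

    // All mutations must hit the primary instance.
    String forWrite() {
        return primary;
    }

    // Reads rotate across replicas; fall back to primary if none exist.
    String forRead() {
        if (replicas.isEmpty()) {
            return primary;
        }
        int i = Math.floorMod(next.getAndIncrement(), replicas.size());
        return replicas.get(i);
    }
}
```

In practice the JDBC connection string for each statement would be built from the endpoint this router returns.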

Steps to Connect to WPA2 WIFI Network in Linux Ubuntu

Posted on:
Author: Ray Pairan Jr.

After restoring my Linux Ubuntu 14.04 LTS OS in one partition of my dual-boot laptop, I needed to get a network connection from the command line. Below are my vetted steps to link to a WPA2 WIFI network.

Make certain you are running as root and that you know the name of your wireless interface. On personal computers it is typically wlan0.

  • Turn on the wireless card
    • $ ifconfig wlan0 up

  • Turn on the interface
    • $ ip link set wlan0 up

  • Scan for wireless networks, locate the SSID of the network you want to connect to, and ensure that it is up
    • $ iw wlan0 scan | less

  • Create the wpa_supplicant.conf file with the password for the SSID.
    NOTE: If you have already created this config file, skip this step
    • $ wpa_passphrase 'Name of WIFI network/hotspot' 'Password for WIFI network/hotspot' > /etc/wpa_supplicant.conf

  • Display the wpa_supplicant.conf file to verify the name and password are correct
    • $ cat /etc/wpa_supplicant.conf

  • Connect to the WIFI network/hotspot
    NOTE: The default driver is nl80211, so there is no need to specify a driver. If the connection fails you will need to determine which driver you should be using, but first try wext, an all-purpose driver, so that the command becomes:
    $ wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf -D wext
    • $ wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf

  • Test the connection
    • $ iw wlan0 link

  • Run dhclient to obtain an IP address
    • $ dhclient wlan0

  • Ping to determine if you can reach the Internet
    • $ ping -c 4 google.com
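
For reference, the wpa_passphrase step above generates a file shaped roughly like this (the SSID and psk values are illustrative placeholders; the commented psk line echoes the plaintext passphrase and can be deleted):

```
network={
    ssid="MyHomeNetwork"
    #psk="plaintext passphrase"
    psk=59e0d07fa4c7741797a4e394f38a5c321e3bed51d54ad5fcbd3f84bc7415d73d
}
```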

The above is a description of the basics to connect to a WPA2 WIFI network using the Linux Ubuntu command line. Obviously, there are many verification steps that have been omitted to keep this step-by-step process succinct.


Evolutionary Approach To Continuous Integration

Posted on:
Author: Ray Pairan Jr.

Tools like Bamboo are used for release automation (auto-deploy) to move builds through environments like development and QA, and finally into production, whether Blue (pre-delta) PROD or Green (post-delta) PROD. The entire continuous integration process needs to be fused together into a coherent and logical sequence of events utilizing various tools.

Separate the build and run stages such that there are well defined build, release, and run processes. Keep DEV, staging, and PROD as similar as possible to ensure DEV/PROD parity.

In conjunction with release automation tools, the continuous integration process also uses source control tools like SVN or Git/GitBucket, together with Sonar, Blackduck, and possibly Veracode, to check the code throughout the integration phases for adherence to code quality standards and security requirements.

Artifactory should also be integrated into the continuous integration process to ensure that only pre-authorized open-source and non-open source code libraries and other dependencies are pulled to DEV and utilized during the development phase across iterations.

Most QA testing, including fuzz testing, should be automated in the continuous integration build pipeline. Running outbound generative tests against the service dependencies of an application, making them act the way you expect, can also ensure that anomalies with micro-services are isolated in the build pipeline rather than after the system is migrated to PROD.

Suffice it to say, looking only at the end-state PROD ON or PROD OFF deployment in a release process is probably an anti-pattern in the creation of production-ready software. Unfortunately, the entire macro continuous integration process must be analyzed even when just moving to an end-state PROD strategy like Blue/Green.

Architectural requirements can even be incorporated into a continuous integration process to ensure the architectural fitness of the system. Tools like JDepend and NDepend exist to test the architectural fitness of an application during the build process. Rules defined to protect the architectural dimensions execute each time the system changes.
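
A toy fitness function in that spirit, a hypothetical sketch rather than JDepend's or NDepend's actual API, might scan import statements and fail the build whenever one layer reaches directly into another. All package names below are illustrative assumptions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy architectural fitness function: verify that a piece of source
// text never imports a forbidden package prefix (e.g. the web layer
// must not import the persistence layer directly). Real tools such as
// JDepend analyze compiled classes; this only pattern-matches source.
class LayerRule {
    private static final Pattern IMPORT =
            Pattern.compile("^import\\s+([\\w.]+)\\s*;", Pattern.MULTILINE);

    // Returns true when no import statement starts with forbiddenPrefix.
    static boolean respectsRule(String source, String forbiddenPrefix) {
        Matcher m = IMPORT.matcher(source);
        while (m.find()) {
            if (m.group(1).startsWith(forbiddenPrefix)) {
                return false; // layering violation found
            }
        }
        return true;
    }
}
```

Wired into the build, an assertion over every source file turns the architectural rule into a test that runs each time the system changes.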

The end goal is to optimize our systems for a hostile and turbulent world instead of an idealized environment. Architect and design in failure modes that protect the most critical parts of the application, because with certainty all software fails, some more adversely than others. Systems with many integration points across multiple dependencies, systems that are highly complex, tightly coupled, and lacking failure bulkheads to prevent breakdown propagation, have a higher probability of failure than lean, minimalist, loosely coupled, less complex applications. Each dependency or piece of needless complexity a system carries raises the risk of a cascading failure. So an out-of-sight, out-of-mind philosophy toward the open-source and non-open-source libraries you use does not mean reality will not strike, bringing your system to its knees.


Software Ecosystem Should Be Minimalist

Posted on:
Author: Ray Pairan Jr.

Just read an article in The Guardian entitled “Franken-algorithms: the deadly consequences of unpredictable code” that, among other interesting insights, pretty much agrees with my theory that we should write lean, predictable, and understandable code so that we can generally ensure the outcomes our customers and stakeholders desire.

Layer upon layer of libraries and open-source code that has not been fully vetted for code quality against generally accepted standards, and that most certainly has not been tested to the extent we would test project code at the highest layer of abstraction for our customers, can be a source of unacceptable uncertainty with potentially negative consequences.

At what point in layers of abstraction does an application become brittle and tightly coupled to code we may not have any control over?

None of what I am suggesting means we should throw out open source, especially since this egalitarian form of software development embraces the best of humankind’s desire to build resilient communities based on cooperation. Only that we should be very careful about how many layers we pile on top of what may become ‘spaghetti code’ so convoluted that its state transitions and ending states become unknown.

Just wanted to share the article because far too often when we are flying at warp speed through projects it is easy to lose sight of some of the basic tenets of software engineering.


XML Deserves Another Chance

Posted on:
Author: Ray Pairan Jr.

There is nothing de facto requiring that all RESTful services use JSON to transfer data. Furthermore, JSON requires that the application understand the data structure and semantics a priori, before parsing a JSON document. Once again, the technology community has thrown out a perfectly good technology, labeled it deficient (in the case of XML, verbose), and let those who were ill-informed about the technology's full capabilities drive the purging of this new heresy.

First we threw out DTDs, which were a very simple way of describing an XML document to an external system. Then the technology community decided that the overly complex XML Schema (XSD) would better describe an XML document to an external application. In the process, those of us at the forefront of developing the XML standards back in the late 1990s just scratched our heads, wondering how a very simple and complete technology could be so convoluted by those who knew so little about the underpinnings of this document structure.

Go back to the basics using a simple but complete DTD XML document definition (that is, if you can find a parser able to process it); otherwise, just use a bare-bones XSD. You do not need to pollute the application layer with code that must comprehend the definitional characteristics of a document, as JSON does; an XML DTD or XSD can already do this admirably. In fact, DTDs and XSDs for different XML document type payloads can be agreed upon across an industry or within any organizational structure; branch your code to handle whichever document types the application processes need to be aware of. When you receive a valid document that conforms to the DTD or XSD, your code can use the many parsing abstraction layers that already exist in various languages to parse a document of a specified type. If you are a fan of object-oriented languages, create a single abstract class with method signatures and method implementations, and sub-class from this abstraction for the various types of documents you need in your document library. Much, much more has been lost over time and needs to be relearned.
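The abstract-class-per-document-type approach described above can be sketched in Python's standard library. The `invoice` document type, `InvoiceDocument` class, and handler registry are illustrative assumptions, not part of any agreed-upon industry schema:

```python
import xml.etree.ElementTree as ET
from abc import ABC, abstractmethod

class XmlDocument(ABC):
    """Abstract base: one subclass per agreed-upon document type."""
    def __init__(self, raw: str):
        self.root = ET.fromstring(raw)

    @abstractmethod
    def summary(self) -> str: ...

class InvoiceDocument(XmlDocument):
    """Handler for a hypothetical <invoice> payload."""
    def summary(self) -> str:
        total = sum(float(i.get("price")) for i in self.root.iter("item"))
        return f"invoice {self.root.get('id')}: {total:.2f}"

# Branch on the root element name to pick the handler class.
HANDLERS = {"invoice": InvoiceDocument}

def load(raw: str) -> XmlDocument:
    return HANDLERS[ET.fromstring(raw).tag](raw)
```

In a real system the document would first be validated against its DTD or XSD before `load` dispatches it to the matching subclass.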

Furthermore, there is all this talk of how we need to pass semantics across the wire to various disparate systems to build a firmer AI substrate. Once again, those leading the charge to toss out XML never really understood it in the first place. XML attributes (now very rarely discussed, mostly forgotten) were always intended to carry the semantic meaning of each node across the parent-child relationship, creating a semantic map of the entire structure.
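The attribute-driven semantic map mentioned above can be demonstrated with a short tree walk; the sample document and its attribute names are invented for illustration:

```python
import xml.etree.ElementTree as ET

def semantic_map(raw: str) -> dict:
    """Collect each node's attributes, keyed by its parent/child path,
    yielding a semantic map of the whole structure."""
    root = ET.fromstring(raw)
    result = {}

    def walk(node, path):
        path = f"{path}/{node.tag}"
        if node.attrib:
            result[path] = dict(node.attrib)
        for child in node:
            walk(child, path)

    walk(root, "")
    return result
```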

So before we get on the bandwagon to ride the next bright and flashy shining trinket into oblivion, we should first more fully understand the technology we are trying to supplant.


HATEOAS Web Service Concepts from An Implementation Standpoint

Posted on:
Author: Ray Pairan Jr.

This document presents an implementation strategy for Hypermedia as the Engine of Application State (HATEOAS). It addresses the alignment with HATEOAS, a Micro-Service architecture, and follows bounded context/service domains.

Concepts

  • Link relations expose application transition states
  • Links advertise legitimate application state transitions
  • On each iteration, the service and consumer exchange representations of resource state, not application state
  • Retrieving info via GET provides resource state representation
  • Change web application state via POST

Questions

  • Are there any benefits in using WebSockets that create a TCP connection to stream data?
  • Are we able to monitor and track service processes across context boundaries?
  • How does caching affect the successful implementation of HATEOAS web services?
    • Does this mean that a HATEOAS web service design pattern will always be…?
      Cache-Control: no-store
    • Cache-Control headers could be added by the server prior to sending the response to the client.
    • Servers can enforce fresh state transitions that get sent by responses to the client by eliminating caching for a select few responses that get passed back to clients that need to know the current application state.
    • Weak consistency is an inherent feature of the web due to its stateless design, so maintaining state in a distributed application of loosely coupled web services that coalesce to exhibit the behavior of a cohesive system is challenging.
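The caching questions above suggest a simple policy: mark state-transition responses `no-store` while letting static representations cache normally. A minimal sketch (the function and its parameters are illustrative assumptions):

```python
def build_headers(is_state_transition: bool, max_age: int = 300) -> dict:
    """Server-side header selection before a response is sent.

    State-transition responses are marked no-store so clients always see
    the current application state; other representations keep a normal
    cache lifetime."""
    if is_state_transition:
        return {"Cache-Control": "no-store"}
    return {"Cache-Control": f"max-age={max_age}"}
```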

Potential Goals

  • Avoid database integration at all costs … dependencies between the web service API and underlying data sources should be minimized
  • The purpose of REST should be to model entities
  • Prefer choreography over orchestration
  • UIs viewed more as compositionally layered
  • Services should be highly cohesive and loosely coupled
  • Keep middleware dumb and end-points smart
  • Services should cleanly align to bounded contexts
  • Service implementation should relate to resource life cycles NOT the application protocol life cycle
  • Keep services and micro-services technology-agnostic
  • Move toward resource-oriented hypermedia-driven distributed applications
  • Use hypermedia that conveys both business data and the information necessary to drive a protocol specific to the business domain
  • Use hypermedia to model state transitions and describe business protocols
  • Always return from the server in the response the current state of the resource like...
    Status: processing
  • Synchronize state using an ETag value in subsequent conditional requests to prevent race conditions; use the If-Match or If-None-Match conditional request headers
  • Minimize Performance Web Service Antipatterns like: Chatty I/O and Extraneous Fetching
  • Decouple, decouple, decouple...
  • When possible keep resource URIs no more complex than /collection/item/collection
  • Use standard web service patterns for POST, GET, PUT, and DELETE
  • Long running processes should use an asynchronous process that can be polled using a GET request to the endpoint to know when the process has completed.
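The ETag goal in the list above amounts to optimistic concurrency control. The sketch below is a minimal in-memory illustration; `ResourceStore`, its method names, and the 16-character ETag truncation are all assumptions for the example:

```python
import hashlib
import json

class ResourceStore:
    """An ETag derived from the resource body must match the client's
    If-Match value before an update is applied, preventing races."""
    def __init__(self):
        self._data = {}

    @staticmethod
    def etag(resource: dict) -> str:
        body = json.dumps(resource, sort_keys=True).encode()
        return hashlib.sha256(body).hexdigest()[:16]

    def get(self, key):
        resource = self._data[key]
        return resource, self.etag(resource)

    def put(self, key, resource, if_match=None):
        if key in self._data and if_match != self.etag(self._data[key]):
            return 412  # Precondition Failed: someone else won the race
        self._data[key] = resource
        return 200
```

A client that re-submits a stale ETag receives 412 and must GET the current state before retrying.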

State Maintenance and Transitioning
Maintaining state across web service transitions may not be necessary; we may just want to provide clients with all the relevant web service links, which they can analyze for next-step or otherwise interesting web services they would like to utilize.

  • Workflows or process flows are mapped via web service resource calls
  • Process/Workflows are choreographed
  • Each web service resource activity/process is aware of the next-step or otherwise relevant web service calls given the current state of the resources.
  • Advertising the next steps/processes in the protocol is accomplished by embedding hypermedia controls in responses sent back to clients.
  • Server-side activities/actions/triggers that initiate a state transition are advertised through link relations
  • Based upon what is returned from the server the client can make informed decisions on the next course of action(s).
  • In some cases, the client discovers what the next course of action should be based upon its historical progression through the application and its desired foreseeable state changes that relate to perceived positive outcomes
  • Suggested patterns/paths of client usage may be conveyed by the server response but the client is free to take the trail less traveled in a quest of discovery.
    • State transitions via called web services dynamically evolve the transitional state changes and their resulting final application state after user interaction has ended.
  • Actions/triggers are choreographed to the client along with the current application state
    • Even more helpful in helping the client ascertain what state transition trigger to execute would be to include the resulting state of a web service link selection
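The choreography described above can be sketched as a server-side response builder that advertises only the transitions legal from the current state. The order states and link names here are invented for illustration:

```python
def order_representation(order_id: str, state: str) -> dict:
    """HATEOAS-style response: resource state plus the link relations
    (hypermedia controls) advertising legitimate next transitions."""
    transitions = {
        "created": {"pay":    f"/orders/{order_id}/payment",
                    "cancel": f"/orders/{order_id}/cancel"},
        "paid":    {"ship":   f"/orders/{order_id}/shipment"},
        "shipped": {},  # terminal state: no further transitions advertised
    }
    return {"id": order_id, "status": state, "_links": transitions[state]}
```

A client never hard-codes the workflow; it simply follows whichever links the server includes in each response.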

Deployment, Build, and Run

  • Host a single VM in HashiCorp Vagrant that sets up, tears down, and runs a Docker instance
    • Reduction of single-point web service errors
    • One web service failing only impacts the single service
    • Easier to scale a single service that is independent of other web services
    • Security can be micro-focused only on those web services that require it
    • Single-web service per host/container design pattern
  • OR… PaaS on the Cloud may work but it is still in its infancy for web service deployments
    • Any application that deviates from the average system will not autoscale appropriately using the canned heuristics of the PaaS.
  • Automate the entire deployment process to reduce the inherent complexity of deploying web services.
  • Each web service is a Docker application that runs within its own container
  • The build process creates Docker images and stores them in the Docker registry
  • Use Kubernetes to select a Docker container/web service to run
  • HashiCorp Terraform used to “Write, Plan, and Create Infrastructure as Code”
  • Each web service should have its own Continuous Integration (CI) build process
  • Use blue/green deployment: only a single web service instance is operational while the other, newly deployed instance is tested in situ.
  • Version control all configuration processes
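The blue/green pattern in the list above can be approximated with a tiny router sketch. This is an illustration of the idea only; real traffic switching happens at the load balancer or DNS layer, and the class and URL names are assumptions:

```python
class BlueGreenRouter:
    """Traffic goes to the live color; a new deployment lands in the
    idle color, is tested in situ, and only then becomes live."""
    def __init__(self, blue_url: str, green_url: str):
        self.pools = {"blue": blue_url, "green": green_url}
        self.live = "blue"

    @property
    def idle(self) -> str:
        return "green" if self.live == "blue" else "blue"

    def route(self) -> str:
        return self.pools[self.live]

    def promote(self, health_check) -> bool:
        # Flip only if the idle deployment passes its in-situ tests.
        if health_check(self.pools[self.idle]):
            self.live = self.idle
            return True
        return False
```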

Testing

  • Each web service's CI build pipeline runs Build, Unit Tests, and Service Tests; all pipelines resolve to a shared End-to-End set of tests.
  • Any time code is checked into source control for any web service, the automated build is triggered and the cross-service CI pipelines are activated just before the End-to-End tests.
  • Run the End-to-End tests periodically but not at the same frequency as each code change CI pipeline activation.
  • Make certain that each test is deterministic – resulting in behavior consistent with expectations.
  • Only test what needs to be tested
  • Using blue/green deployment test any newly deployed web service in situ – once successfully tested the newly deployed web service replaces the running web service as the new production version.
  • Test to ensure that non-functional requirements are being met: acceptable latency of web pages, the number of users the system needs to support, and other subsidiary requirements with a focus on the impact of new web services.
  • Across web service system level testing also needs to be done in conjunction with standard and non-functional testing.
  • Use the currently deployed production system as the baseline to compare your performance and non-functional requirements against for both the aggregate web services and any newly created web services.

Monitoring

  • Use Graphite for system and individual web service level metrics collection
  • Nginx and Varnish will expose other useful metrics like cache hit rates and response times.
  • Historical data across web services and the system is crucial in determining whether a system is misbehaving, i.e. operating below the optimal specifications derived from a long-running analysis.
  • Call-chain monitoring in logs can be handled across web service and system boundaries by using correlation IDs in the headers.
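Correlation-ID propagation, as noted above, is simple to implement: reuse the caller's ID or mint one at the edge, then stamp every log line with it. The header name and helper functions below are common conventions, not a fixed standard:

```python
import uuid

CORRELATION_HEADER = "X-Correlation-ID"

def inbound(headers: dict) -> dict:
    """Reuse the caller's correlation ID, or mint one at the edge."""
    cid = headers.get(CORRELATION_HEADER) or uuid.uuid4().hex
    return {**headers, CORRELATION_HEADER: cid}

def log_line(service: str, message: str, headers: dict) -> str:
    # Every log line carries the ID so a call chain can be stitched
    # together across web service and system boundaries.
    return f"[{headers[CORRELATION_HEADER]}] {service}: {message}"
```

Each downstream call forwards the same header, so a single grep over aggregated logs reconstructs the full call chain.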

Scaling and Limiting & Handling Failure

  • Up and down on demand scaling accomplished in the Cloud using infrastructure-as-a-service (IaaS)
  • Distribute web services across AWS availability zones (AZs) – inside each region (sub-cloud)
  • Implement bulkheads with Hystrix to refuse requests under conditions of extreme resource saturation (load shedding) so resources do not get saturated to the point of overwhelming the entire system.
  • Attempt to isolate web services so they are not dependent upon other web services.
  • Run several instances of each web service behind a load balancer
  • Do not invest time in dealing with scale that may never transpire – this is wasted effort and time best utilized elsewhere.
  • Copy the data in a database used by web services from the primary node to other node replicas.
  • Increased database write volumes can be handled by sharding.
  • Mongo and Cassandra offer new scaling models
  • Depending upon load, data freshness needs, and the responsiveness of the overall distributed web service based system opting to have various types of caching may be desired.
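The Hystrix-style bulkhead mentioned in the list above can be approximated with a semaphore. This is a sketch of the concept, not Hystrix's actual API; the class name and shedding policy are assumptions:

```python
import threading

class Bulkhead:
    """At most max_concurrent calls run at once; excess requests are
    refused (load shedding) instead of saturating the resource."""
    def __init__(self, max_concurrent: int):
        self._slots = threading.Semaphore(max_concurrent)

    def call(self, fn, *args):
        if not self._slots.acquire(blocking=False):
            return None  # shed the load rather than queue forever
        try:
            return fn(*args)
        finally:
            self._slots.release()
```

Giving each downstream dependency its own `Bulkhead` keeps one saturated dependency from exhausting the threads needed by the others.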

Security

  • Use SAML or OpenID Connect (an implementation of OAuth 2.0)
  • OpenAM and Gluu are identity providers for OpenID Connect
  • OpenID Certified providers
  • Active Directory is the preeminent identity provider for SAML
  • SSO gateway is a proxy that exposes the web services to the outside world
  • API keys used for web service-to-service messaging
  • Sensitive data residing even in what is deemed a secure perimeter should be encrypted using a programming language (like Java or C#) implementation of AES-256.
  • Store encryption keys in a separate key vault.
  • Always decrypt data on demand and never wholesale.
  • Practice “Defense in Depth” by securing multiple layers of a system and infrastructure.
  • Utilize a sophisticated firewall like ModSecurity.
  • Cull sensitive data from logs that could be used by hackers trying to gather tidbits of information to find points of vulnerability.
  • Do not store inessential data that could be stolen resulting in significant embarrassment and loss to the business.
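The log-culling point above can be implemented as a logging filter that redacts sensitive patterns before records reach any handler. The two patterns shown (card-number-like digit runs and `password=` values) are illustrative assumptions, not a complete rule set:

```python
import logging
import re

class RedactFilter(logging.Filter):
    """Cull sensitive data from log records before they are written,
    so logs cannot hand attackers credentials or card numbers."""
    PATTERNS = [
        (re.compile(r"\b\d{13,16}\b"), "[REDACTED-PAN]"),
        (re.compile(r"password=\S+"), "password=[REDACTED]"),
    ]

    def filter(self, record: logging.LogRecord) -> bool:
        msg = record.getMessage()
        for pattern, replacement in self.PATTERNS:
            msg = pattern.sub(replacement, msg)
        record.msg, record.args = msg, ()
        return True
```

Attach it with `logger.addFilter(RedactFilter())` so every handler on that logger sees only the redacted message.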

Team Size

  • Small teams no larger than Amazon's two-pizza gauge
  • Team that takes ownership of the entire development process from sourcing requirements, the build process, testing, deployment, and maintenance
  • A few microservices are owned by the small team

Service Discovery

  • Use Consul for service discovery – has a DNS server and supports SRV records providing an IP and port for a name.
  • Endpoint capability discovery could be handled by using HAL
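The HAL idea above can be illustrated with a plain dictionary shaped like an `application/hal+json` document; the resource fields and link names are invented for the example:

```python
def hal_document(order_id: str) -> dict:
    """HAL representation: resource state plus a _links section
    advertising endpoint capabilities discoverable at run time."""
    return {
        "id": order_id,
        "status": "processing",
        "_links": {
            "self":   {"href": f"/orders/{order_id}"},
            "cancel": {"href": f"/orders/{order_id}/cancel"},
            "items":  {"href": f"/orders/{order_id}/items"},
        },
    }
```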

Overview of Java EE 8

Posted on:
Author: Ray Pairan Jr.

The end of Java is not near, especially for the polyglot. You want to use JSF extended by Primefaces, Richfaces, or Icefaces with a middle layer of CDI Managed Beans using application server auto generated JPA entities mapped to an RDBMS like Oracle - go right ahead and design this monolithic segment of your system. JPA entities can still be mapped to DB Views that call complex SQL queries or straight DB tables exposing data-points that web pages and backend methods utilize.

Wait, you can even use Skinny WAR file builds (no more than kilobytes in size) in conjunction with many databases on multiple servers either RDBMS or NoSQL (like MongoDB). The design patterns you use can be many and diverse so that you have the flexibility to scale both horizontally and vertically without dead-end alleys blocking your way.

But why stop there, with Java EE 8 you can use Angular, AngularJS, plain old JavaScript, JQuery, JSTL, EL, CSS3, and any flavor of HTML even HTML5 on the client-side.

So you say you need SOA, particularly microservices: JAX-RS, with the JSON-B and new JSON-P packages included in Java EE 8, makes development of RESTful web services a breeze. You want to containerize your web services? Then why not use Docker coupled with Kubernetes orchestration.

Java EE 8 also supports asynchronous communication both on the client and server. Better yet, never fret over security because with Java EE Security API 1.0 Java EE 8 allows you to handle security issues across the entire framework in a consistent manner.

What about validation? This is still built into the bean layer with Bean Validation 2.0.

Coding thousands of lines in the weeds, with dependencies on every known open-source technology, is not something you will need to worry about when using Java EE 8 - just focus on the business requirements and on designing the most maintenance-free, user-friendly, lean applications.

Modularity is now part of the new Java SE 9 code library that Java EE 8 requires. In Java SE 9, logically grouping classes in packages, which can further be part of modules within a project, significantly enhances reusability, especially since these modules can also be built separately using a build and dependency resolution tool like Gradle. These modules may also be distinct domain-isolated build artifacts that can be generated when needed and reused in other projects.

Domain-Driven Design bounded-contexts can also be easily enforced through the use of Java packages. High-level packages or modules could reflect contexts of a particular domain like an online store. The packages could be purchase, ship, merchandise, and user all contexts of the domain store. Internal to each package/context could be found sub-packages split by technical concern like view, controller, model, and persistence. Each domain context Java package would have the same technically categorized sub-packages.

No matter what architectural design pattern is selected, never overcomplicate the architecture to the point that the development process is seriously hindered. Stay clear of cargo-cult programming, a dogma-driven approach inspired by a constrained herd-mentality perspective. Spring 5 and especially Spring Boot are a marvelous group of technologies, but always opt to use Java EE 8 or later instead of Spring MVC. Java EE's intrinsic MVC design pattern allows developers to concentrate on the important aspects of their application instead of 'wiring' the MVC design pattern into their systems.

One size fits all technologies do not exist. Just because a technology is new does not negate the use of more seasoned approaches that might still be used to solve complex challenges. Always attempt to isolate technologies that reduce developer complexity, enhance maintainability, and meet the architectural requirements of the applications you will eventually field to production. Never pick the latest shiny tools off the 'shelf' just because everyone else is reaching for them - research before you recommend and implement any technology strategy.


My Early 1999 - 2004 Trailblazing Applications and Concepts that Contributed to Many of Our Most Advanced Modern Technologies

HTML Editor - Direct Editing of HTML Document


RDML Editor - One of the First XML Databases


Spherex - Early Progenitor of NLBIS

NLBIS and its Progenitor Applications

NLBIS - Manual & Autonomous Control


NLBIS - Record/Playback

In 2006 NLBIS was the first Multiagent Multidimensional-Domain Online Reasoning System.

With NLBIS a knowledge base designer can create multiple worlds with intended interpretations using the precepts of propositional calculus fused to object-oriented representations.

The ontology of a NLBIS world [the axiomatization of domain(s)] is specified by the knowledge base designer through the use of the Hypervisor-Brain (admin layer) that writes or updates the ontologies to RDML (an XML schema) files.

NLBIS explicitly represents the state-space for a domain with state-feature pairs that specify the resulting desired state after carrying out actions (the act of transitioning to the desired state) that are tied to a specific transition state's features. Once NLBIS nodes resolve to their desired state this desired state becomes their current state.

The domains that are the NLBIS multidimensional representations of worlds created by the knowledge base designer also have their own deterministic planners, assimilated into the highest-level Universe-view deterministic planner of NLBIS.

Each node runs in its own asynchronous distributed process within a domain that itself runs in its own asynchronous distributed process, with all of them running concurrently in NLBIS.

Coordinated Integrated Unbounded Rationality

NLBIS SoS from my BlueNova Software

Preemptive State Control

Within a system such as NLBIS that monitors and controls all slave systems in-process, it would be quite feasible not only to monitor slave system states and inhibit them from moving outside desired states, but also to preemptively move slave systems from their current state1 to a current state2 based upon analysis of well-defined external and internal stimuli affecting the slave systems. Essentially, the probability of a slave system moving from one current state to another could be determined from historical metrics related to the external and internal stimuli affecting the slave system. This would add another layer of intelligence to systems like NLBIS that already deterministically control their system-of-systems slaves to adhere to desired states. These other states might become a subset of desired states, possibly called preemptive states.
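The preemptive-state idea above reduces to estimating, from historical observations, the probability that a given (state, stimulus) pair leads to a particular next state. The sketch below is my own minimal illustration of that estimator; the class, state names, and 0.8 threshold are assumptions, not NLBIS internals:

```python
from collections import Counter, defaultdict

class TransitionModel:
    """Estimate from history the probability that a slave system, under
    a given stimulus, moves from one current state to another; act
    preemptively when the probability is high enough."""
    def __init__(self):
        self._counts = defaultdict(Counter)

    def observe(self, state: str, stimulus: str, next_state: str):
        self._counts[(state, stimulus)][next_state] += 1

    def probability(self, state, stimulus, next_state) -> float:
        history = self._counts[(state, stimulus)]
        total = sum(history.values())
        return history[next_state] / total if total else 0.0

    def preempt(self, state, stimulus, threshold=0.8):
        history = self._counts[(state, stimulus)]
        best = max(history, key=history.get, default=None)
        if best and self.probability(state, stimulus, best) >= threshold:
            return best  # move the slave system there preemptively
        return None
```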


Far too often, multiple third-party APIs are used in the development of a system simply because of the seemingly relevant business expedient of reducing the development timeline. Utilizing a 'glue API' that does not provide a significant technological advantage over a far simpler self-developed construct may increase the likelihood of failure within a system. It must be remembered that the introduction of any code, and most of all an entire code library (API), into a system will affect that system in many ways, some of which may be negative. State Stability Theory is relevant in describing not only the effects of 1...n sub-systems linked to a network but also 1...n code libraries utilized within a system. In effect, each and every code library utilized within a system becomes a point of failure that is much more difficult to debug and maintain. The savings gleaned from utilizing a code library should be weighed against the increased probability that maintenance of the entire system grows with the introduction of each library.

Desired states of a system are highly dependent upon the states of the namespace-delimited system components, some of which are themselves libraries, their associated objects, and those objects' associated methods or functions. Therefore, the benefits obtained from utilizing any library should outweigh the potential disadvantages of introducing an element into the system that may have otherwise indeterminate overall system state effects, increased maintenance costs, an increased overall system footprint, and the time expended by developers learning the new library. Nothing above suggests that libraries shouldn't be utilized, only that their utilization should first be subjected to a cost-benefit analysis. In some cases, developing functionality comparable to that provided by a library would entail significant effort and time on the part of a development team. It is in these cases that a library might provide advantages that far outweigh its effect upon a system. In fact, if the library's state effects can be isolated and compensated for within the system with minimal effort, then the usage of the library is advantageous.

Unfortunately, in far too many cases a library is used simply because it provides a limited degree of functionality that the developer could have coded more simply; but due to developer inexperience, unrealistic management-imposed time constraints, simple laziness on the part of the developer, or any number of other reasons, a library is utilized of which only a small fraction of the capability is used. These are the 'red flag' cases that expose a system to unpredictable state effects precipitated by the library, an increased system footprint, and increased maintenance costs due to the library's indeterminate state effects on the system and other libraries (my State Stability Theory). Essentially, only use a library when the benefits far outweigh the risks, and when that is the case, make it a point to completely understand the library's effects on the stability of the system and, where possible, compensate for these state stability effects.

When something starts to seem too complex, it probably is - simplicity equates to elegance, and this is nature's check on instability. Simplify, to the point of intellectual pain.


NLBIS Engineering Notes


NovaFire screens from my BlueNova Software



My Very Early SseSystem from 2002

SseSystem JavaDocs API

SSESystem Later UML Models

Strategic Management Pro

UML Diagrams
Source Code


Some development tools that I use...
