Thing Relationship Management (TRM)


Historically, the term Internet of Things (IoT) describes a physical network of embedded, dedicated objects that sense and interact with their own and external environments. Advances in context-aware software that can "learn and analyze" create scenarios where things become active players in digital relationships. Imagine a near future where "Things" become independent business entities with a pre-determined capacity to act like "customers" or "suppliers" within a commercial construct. Through automation, Things would be able to make their own purchasing decisions, receive messages, request service, negotiate for the best terms and report disputes - essentially just like a human would. Along the same growth trajectory is "algorithmic business," where the interaction, exchange, interplay and network effect of value is encapsulated in programming logic and inserted into the transaction flow between customers and suppliers. At the intersection of these two trends lie not only new opportunities for revenue generation and operational efficiency but also new ways of managing relationships. Much as we do today with Customer Relationship Management (CRM), leaders will need to develop strategies for Thing Relationship Management (TRM).

A useful thought experiment is a Thing whose service utility requires replenishment of supply, such as a soap dispenser in a hospital. The monitoring system detects the refill requirement and, before alerting housekeeping, checks the on-site inventory. Pre-determined business logic could require refilling within an hour, so if the on-site inventory is depleted, the Thing initiates a refill order with the preferred supplier. If the preferred vendor cannot fill the order until the following day, this is where the Thing becomes a proactive commercial participant in the supply chain: it makes successive requests to alternative suppliers and negotiates the best price and terms for delivery, placing the order when it finds a supplier who can meet the specification. The implications of this scenario are far-reaching. The Thing will need a digital identity, delegated authority, trust levels and financial compliance for auditing, just to name a few. Managing these attributes would be very similar to how we manage relationships today in the sales process. Things would essentially be viewed like "people" within a broad set of commercial transactions. I expect to see an adaptation of CRM to TRM in the very near future.


There is growing interest among medical institutions in sharing clinical algorithms through proven open-marketplace models. The idea is to provide advanced analytic algorithms freely while charging provider delivery organizations for other related tools. Hospitals, surgical centers, home-health agencies, outpatient facilities, labs and urgent care centers do not have economical access to best-in-class analytic engines without contracting with specialty vendors. Many of these provider groups have their own data scientists, researchers and clinicians mining and analyzing their vast stores of healthcare information. However, developing and testing algorithms designed to improve quality, patient outcomes or administrative operations using this data is expensive and time-consuming.

So what are these algorithms? Authored by some of the most advanced data and medical scientists in the world, they include clinical pathways, protocols, quality and safety measures and disease predictors, just to name a few. They are all evidence-based and offered by organizations such as the Mayo Clinic, Cleveland Clinic and many other medical institutions globally. Technology providers in big data and analytics have historically offered these tools and services to their provider customers. But now the medical centers themselves are offering to share, promote and sell their expertise and knowledge via a marketplace model such as those offered by Apervita or Teradata's Aster Community. These are not simple formulas or canned reports. They are precision algorithms designed to produce the highest level of accuracy with the lowest rate of false positives for clinical outcomes. With widespread use, peer review and outcome case studies by provider users, the best ones will naturally rise in popularity and quality ratings. Much of consumer selection of products and services is already review-based, with recommendations and star-type ratings available from relevant online sites. More of this transparency is coming to healthcare.

These "pre-packaged" insights are important to providers who are moving to value-based healthcare, where delivering preemptive and predictive clinical decisions at the point of care is critical. By improving patient outcomes with fewer repeat visits and less trial and error, this should lower the overall cost of healthcare for everyone. Many providers are not equipped to create and deliver an extensive portfolio of predictive or prescriptive models to improve population health. An open marketplace reduces or prevents vendor lock-in, and by testing and validating the offered models with their own data, providers also make them better. It's a continuous loop of discovery, testing and validation driven by data portability, self-service and open access. More importantly, this forces new valuation models for information assets on the part of these disparate providers and institutions. Through subscription fees on a trading platform, stakeholders create monetization opportunities to fund ongoing investment in their own capabilities. As industry participants learn how to sell, trade or license their intellectual property, while providing much of it freely, they can help drive innovation in a healthcare sector where it is greatly needed.
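As a sketch of how review-based ranking keeps a marketplace honest, one common approach (my illustration, not anything Apervita or Teradata documents) is a Bayesian average: each algorithm's star rating is shrunk toward the marketplace-wide prior, so an item with two glowing reviews cannot outrank one with hundreds of solid ones.

```python
def bayesian_rating(avg: float, n_reviews: int,
                    prior_avg: float = 3.0, prior_weight: int = 10) -> float:
    """Shrink an item's average star rating toward the marketplace prior;
    sparsely reviewed items can't top the rankings on a fluke."""
    return (prior_weight * prior_avg + n_reviews * avg) / (prior_weight + n_reviews)

# a 5.0 average from 2 reviews ranks below a 4.6 average from 200 reviews
assert bayesian_rating(5.0, 2) < bayesian_rating(4.6, 200)
```

The `prior_avg` and `prior_weight` values are tuning knobs; the essential property is that ratings converge to the raw average only as review volume grows.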

Get Small to Achieve Big


By definition, managing "big" development projects usually translates into big teams with big capital investments. Contemporary analysis of traditional IT projects indicates these big projects also carry a high risk of failure. Many leaders believe extensive reporting and oversight will ensure timely completion and reduce risk; however, in meetings and reviews, confidence in outcomes is always debated and we tend to manage to the worst case. It's easy to look at the continuous software delivery of large social media companies and think the same can be achieved at a large enterprise. One big difference lies in the mission criticality of enterprise applications, especially in healthcare. Software development also suffers from Parkinson's Law, the adage that "work expands so as to fill the time available for its completion."

It's best to attack Big with Small, as in small teams. Project resourcing is non-linear: by doubling resources, you don't double output. Large teams have inherent management overhead just to keep them going, and it's better to parse out the work. You have to delegate authority to your program leads. We have a concept of "staying on the reservation": if you let your leaders know their decision-making levels, they can confidently make decisions without wandering off the reservation. The voice of the customer has become a buzzword in agile development recently; augmenting development with customer feedback in real time, using best practices from market research, will result in better products from the get-go. Design is so much more than just gathering requirements - design is everything! Utilizing DevOps and related tooling will speed up delivery and support of the complete solution, and you can move from a minimum viable product to a minimum marketable product more quickly.
As teams learn, work and grow together, mutual trust will develop and competency will increase. Only through trust will team members feel secure enough in their role to be transparent with their abilities and ask for help early on before their work goes off track. A smaller team at a smaller table is a better place to work out those types of development challenges.
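The non-linear resourcing claim has a simple Brooks's Law-style illustration: coordination channels grow with the square of team size, so doubling headcount more than quadruples the communication overhead.

```python
def communication_paths(team_size: int) -> int:
    # every pair of team members is a potential coordination channel: n(n-1)/2
    return team_size * (team_size - 1) // 2

# doubling a 5-person team to 10 more than quadruples the channels
assert communication_paths(5) == 10
assert communication_paths(10) == 45
```

This is why parsing the work out to several small teams, each with delegated authority, beats one large team: each small team pays a modest coordination cost internally instead of everyone paying the full quadratic cost together.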

Imagination Engineers Wanted


A technology area that drew increased interest and investor capital last year was 3D printing. While the underlying mechanics have been around for over 30 years, it has recently become a hot topic for both consumers and business leaders. It combines many technical domains such as software, materials science, manufacturing, optics and both mechanical and electrical engineering. It changes the way we think about building things: rather than using "subtractive" methods, like cutting blocks of material, or "formative" methods using molds, it uses "additive" techniques. Items are built layer by layer from digital design files using many different printing technologies. It has the power to create a multitude of niche, personalized products as well as to enable a transformation in industrial manufacturing on a global scale. Promising healthcare applications such as skeletal implants, prosthetics, replacement windpipes, facial implants and dentistry are already emerging. Companies like Align Technologies for straighter smiles and Organovo for living tissues are leading the way. Scientists are growing human cells from biopsies or stem cells, then using 3D printers to arrange them the way the human body does. VentureBeat reports that there were more than 50 new startups raising capital in 2014 and over 40 crowdfunding projects. There was even a 3D-printed car by Local Motors at the Detroit Auto Show. The underlying technologies, such as Selective Laser Sintering (SLS), Fused Deposition Modeling (FDM) and Stereolithography (SLA), are fascinating and will continue to attract research and development talent. Some analysts place 3D printing in the Internet of Things category and include it in many "Top 10" technology lists for 2015. It has gained a large hobbyist following and spawned a whole cottage industry. You can even make a 3D figurine of yourself by jumping into one of Doob's 3D photo booths!

Mobile Developers swiftly learning Swift


There has been quite a bit of buzz recently over a report by RedMonk showing the rapid developer adoption of Apple Swift. The new programming language was introduced in June of 2014 and is Apple's successor to Objective-C. The report draws a correlation between discussions on Stack Overflow and code usage statistics from GitHub. The authors caution to take the numerical rankings with a grain of salt, and many commenters correctly point out that they simply reflect how interested developers are, not necessarily any demand from employers seeking those skills. Objective-C is a powerful language but suffers from much of the low-level syntax that challenges C and C++ developers. Swift is intended as a replacement for Objective-C while maintaining high compatibility and integration. It cleverly combines object-oriented and functional programming with dynamic language features, along with managed-code-style runtime support reminiscent of Microsoft .NET. I used to always say, "Happiness is managed code." The interactive coding and debugging of Swift Playgrounds, integrated with Apple's Xcode IDE, adds a bit of RAD (rapid application development) to the developer experience. Most mobile application firms are doing early prototypes and training for Swift with their iOS developers, and the language will continue to mature. Since mobile app projects have relatively shorter development cycles than enterprise applications, adoption should continue at an accelerated pace. Many Ruby developers will also find Swift easier to use and understand than Objective-C. Existing Apple shops with a good inventory of Objective-C libraries can use those assets with Swift since the new language is cross-compatible. Within three years, I expect it to take over not only native iOS apps but Mac OS applications too.

A Crowdtester in Every Garage


Crowdtesting is another market disruption stemming from crowdsourcing that is making its way into the software development world. It is a new way to verify and validate applications along a variety of dimensions, including functional, usability, performance and mobile testing. The approach is most popular with organizations developing customer-facing applications that are mobile or web-based. One benefit of crowdtesting in an agile process is early feedback from a broad pool of testers not tied to the organization or the development team. A couple of firms amassing large tester populations are uTest, with 100,000 registered testers, and Mob4hire, with 60,000. The crowdtesting companies provide tools, training and a community of interest for their virtual workers. There are two primary delivery options, termed "communities": vetted and unvetted. An application owner would select a vetted community if functional, performance, security or localization testing were needed; an unvetted community suits early exploratory or usability testing. Application owners are responsible for understanding and verifying the testing methodologies used, so that bug identification controls are in place to prevent defect leakage or reinjection. Any crowdtesting initiative must pass company compliance and regulatory controls since, by its very nature, it can introduce security risks. Payment for services is typically based on the number of defects found, on a pre-allocated budget tied to a contest or outcomes, or on a per-device and per-platform rate for mobile apps. By structuring the contract around quantity found or time-bounding the process, you avoid protracted testing that could delay implementation. Crowdtesting is also becoming commercially interesting to global service providers with application service resources on their bench.
If service providers have steady business that absorbs 75% utilization of their testing staff, they can deploy the remaining 25% to crowdtesting. This improves their operating profit as it keeps their overall resource utilization rates very high.
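The bench-utilization arithmetic above is simple enough to sketch; the 75%/25% split comes from the text, while the partial-redeployment case is my own illustration.

```python
def blended_utilization(base_util: float, bench_redeployed: float) -> float:
    """Fraction of testing staff billing when idle bench capacity
    is redeployed to crowdtesting engagements."""
    bench = 1.0 - base_util
    return base_util + bench * bench_redeployed

# 75% steady utilization; the entire bench picks up crowdtesting work
assert blended_utilization(0.75, 1.0) == 1.0
# redeploying only 60% of the bench still lifts utilization to 90%
assert abs(blended_utilization(0.75, 0.6) - 0.90) < 1e-9
```

Since the testers' salaries are a sunk cost either way, any bench hour converted to billable crowdtesting falls almost entirely to operating profit.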

Healthy Software Development


High-quality DevOps practices produce a seamless flow of continuous development, deployment and maintenance for large-scale web applications. Today's business environment demands accelerated releases of functionality, from inception to production code. The notion of allowing developers to push code directly into production sets off alarms for most traditional IT leaders, and many experienced development managers recall stories of mistakes made in production environments, some resulting in significant business disruption. However, leading large-scale consumer sites such as Google, Facebook, Netflix and Amazon have adopted these practices and push small code releases every day at an enormous rate. Agile DevOps has its roots in Lean and Kanban. In lean manufacturing, cycle time is the time from when a work product is started to when the finished work product is delivered. For software, this corresponds to the time between when a user story is created and when the story is real code in production. For many high-volume sites, the preferred batch size for a release is a single user story, and each story is put into production as soon as it's complete. With massively dense server farms, there is no way to do this manually; automation tools must be used. We see engineering-style alignment with tools such as Puppet and Chef: if you're coming from the dev side of DevOps, the procedural nature of Chef, built on Ruby, feels natural, while Puppet appeals to ops pros since it is more mature, data-driven and geared toward sysadmins. DevOps seeks to bring these styles together. The freedom of allowing developers to push code into production also comes with the responsibility to ensure the stability of their code after deployment. A cultural shift from project-based IT to product-based IT is necessary to make DevOps successful.
If not, you end up with speedy agile development sprints constrained within quarterly or monthly waterfall release cycles. Bottlenecks upstream and downstream, outside of normal scope, are addressed more readily in a product-based construct. The attitude changes from "it's not my job" to "it's my workflow," and a strong sense of ownership accompanies this new attitude. With increased velocity and shorter cycle times, we can measure mean time between failures and mean time to restore service rather than the number of lines of code produced or the number of defects resolved.
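The two measurements described above, lean cycle time and mean time between failures, reduce to straightforward timestamp arithmetic. A minimal sketch, with hypothetical dates:

```python
from datetime import datetime, timedelta

def cycle_time(story_created: datetime, deployed: datetime) -> timedelta:
    """Lean cycle time: user story created -> running code in production."""
    return deployed - story_created

def mtbf(failure_times: list[datetime]) -> timedelta:
    """Mean time between consecutive production failures."""
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    return sum(gaps, timedelta()) / len(gaps)

# a single-story batch: created Monday morning, in production that afternoon
ct = cycle_time(datetime(2015, 3, 2, 9, 0), datetime(2015, 3, 2, 15, 30))

failures = [datetime(2015, 3, d) for d in (1, 4, 9)]  # gaps of 3 and 5 days
assert mtbf(failures) == timedelta(days=4)
```

Tracking these per team, rather than lines of code or defect counts, rewards small batches that ship quickly and stay up.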

Big Data enables context-aware Security


Enterprises are increasingly required to open and extend their network boundaries to suppliers, partners and customers to support innovative value chains and information collaboration. This, along with more corporate applications being accessed from the cloud and mobile devices, makes firms vulnerable to sophisticated security threats. Big Data Analytics (BDA) applied to enterprise security promises to bring a new level of intelligence to network forensics and risk management. Information security will become more intelligence-driven, contextual and risk-aware in real time. Collecting the data is the easy part; gaining insight into what big data is telling us about security threats is the hard part. BDA frameworks, along with falling infrastructure costs for data warehouses, yield massive clusters of computers that can be managed efficiently with fewer people. These economics will disrupt traditional monitoring, SIEM (Security Incident & Event Management), identity management and governance, risk and compliance (GRC) tooling in the field. Contemporary SIEM devices do aggregation and correlation at roughly thousands of events per second; more sophisticated, big-data-enabled security management platforms should be able to process millions of events per second with the same hardware footprint. Historically you've had to do significant filtering, factoring and reduction of security data to reach a manageable size that allows security professionals to perform analysis and make decisions. Now, mining petabytes of operational and security risk data from diverse sources can provide actionable intelligence in real time. It is expected this "mining" can be done with industry-standard third-party applications through open-source methods.
BDA also enables highly efficient batch processing to analyze historical data and determine when an attack started, how initial probing went undetected and how the attacker breached your systems. Used in an enterprise security context, big data analytics provides situational awareness, automates threat detection, improves reaction times and will ultimately help with prevention. Watch for startups innovating in this space.
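The kind of aggregation and correlation a SIEM performs can be illustrated in miniature. This toy sketch (the event shape, IPs and threshold are all hypothetical) groups raw events by source and flags the high-volume offender an analyst should see first:

```python
from collections import Counter

def flag_probing(events: list[dict], threshold: int = 100) -> set[str]:
    """Correlate raw events by source IP and flag sources whose
    failed-login volume exceeds a threshold -- the reduction step that
    turns millions of events into a handful of actionable leads."""
    failures = Counter(e["src_ip"] for e in events
                       if e["type"] == "auth_failure")
    return {ip for ip, n in failures.items() if n > threshold}

events = ([{"src_ip": "203.0.113.7", "type": "auth_failure"}] * 150
          + [{"src_ip": "198.51.100.2", "type": "auth_failure"}] * 3)
assert flag_probing(events) == {"203.0.113.7"}
```

At big-data scale the same correlate-then-threshold pattern runs over distributed clusters and petabytes of history rather than an in-memory list, but the logic is the same.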


I've been studying crowdfunding and how the new federal JOBS Act will attempt to give non-accredited investors access to seed rounds in early-stage startups. Once limited to artistic endeavors, charity and filmmaking, the concept has grown from the likes of Kickstarter and Indiegogo to greater prevalence in equity investment circles. By some counts there are nearly 1,000 crowdfunding sites in existence, but until the SEC enacts Title III of the JOBS Act, we won't see the new equity crowdfunding portals provided for by the law - not yet. One capital-intensive area, biotechnology, won't see this type of funding replace traditional venture capital anytime soon. According to Scott Jordan of HealthiosExchange, the average successful biotech company raises $49 million over 5.7 years through a series of private equity rounds. I agree with his assertion that crowdfunding would help these firms achieve milestones during the seed stage that will ultimately get VCs interested. There are already sites connecting a wider range of accredited angel investors and allowing them to syndicate with each other, thereby taking more positions in a portfolio of biotech startups. Diversification and "failing fast" are tremendously important in life sciences development and research.

Big Data Museums need Human Curators

Most analysts define "big data" subjectively as information datasets whose size is beyond the ability of mature software tools to capture, store, manage and analyze. As people and businesses go about their lives, they generate a huge data exhaust as a by-product of social media, smartphones, computing and embedded devices. Since it is very hard for machines to pull operational insights out of big data, there is a rising need for data scientists, often referred to as data "curators." Much as a museum curator collects, catalogues, interprets and preserves artwork or historic items, a data curator works to improve the quality of data-driven information within operational processes. This also involves active lifecycle management that attempts to connect the sciences, social sciences and humanities. Even though programs can poll APIs for AWS or GitHub and pull out somewhat structured data, that data cannot be fully interpreted without human intervention. This is good news, because with our newfound tools people will transform the study of the social sciences into digital humanities, where insightful connections are made to economics, law, medicine, education and communication.
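The collect-catalogue-interpret loop can be sketched as a human-in-the-loop pipeline. This is a toy illustration (the record shape and category names are hypothetical): records the machine can catalogue confidently are normalized automatically, while ambiguous ones are routed to a human curator.

```python
def curate(records: list[dict], known_categories: set[str]):
    """Split semi-structured records into machine-catalogued items
    and items needing a human curator's judgment."""
    catalogued, needs_review = [], []
    for r in records:
        category = r.get("category", "").strip().lower()
        if category in known_categories:
            catalogued.append({**r, "category": category})  # normalize in place
        else:
            needs_review.append(r)  # ambiguous: route to a human curator
    return catalogued, needs_review

records = [{"id": 1, "category": " Medicine "},
           {"id": 2, "category": "???"}]
done, review = curate(records, known_categories={"medicine", "law"})
```

The point of the design is that automation handles the clean majority while human judgment is reserved for exactly the records where interpretation is genuinely needed.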


About Paul Lopez

Paul Lopez is a 20+ year technology veteran whose career has spanned multiple disciplines such as product management, software development, engineering, marketing, business development and operations.
