Imagination Engineers Wanted


A technology area that drew increased interest and investor capital last year was 3D printing. While the underlying mechanics have been around for more than 30 years, it has recently become a hot topic for both consumers and business leaders. It combines many technology domains, such as software, materials science, manufacturing, optics, and both mechanical and electrical engineering. It also changes the way we think about building things: rather than using "subtractive" methods, like cutting blocks of material, or "formative" methods using molds, it uses "additive" techniques. Items are built layer by layer from digital design files using many different printing technologies.

3D printing has the power to create a multitude of niche, personalized products as well as to enable a transformation of industrial manufacturing on a global scale. Promising healthcare applications such as skeletal implants, prosthetics, replacement windpipes, facial implants and dentistry are already emerging, with companies like Align Technology (straighter smiles) and Organovo (living tissue) leading the way. Scientists are growing human cells from biopsies or stem cells, then using 3D printers to arrange them the way the human body does. VentureBeat reports that more than 50 new startups raised capital in 2014, alongside more than 40 crowdfunding projects. There was even a 3D-printed car by Local Motors at the Detroit Auto Show.

The underlying technologies, such as Selective Laser Sintering (SLS), Fused Deposition Modeling (FDM, pioneered by Stratasys) and Stereolithography (SLA), are fascinating and will continue to attract research and development talent. Some analysts group 3D printing with the Internet of Things and include it in many "Top 10" technology lists for 2015. It has gained a large hobbyist following and has spawned a whole cottage industry. You can even make a 3D figurine of yourself by jumping into one of Doob's 3D photo booths! (Image: Ben Sandler)

Mobile Developers swiftly learning Swift


There has been quite a bit of buzz recently over a report by RedMonk showing rapid developer adoption of Apple's Swift. The new programming language was introduced in June of 2014 and is Apple's successor to Objective-C. The report correlates discussions on Stack Overflow with code usage statistics from GitHub, and the authors caution readers to take the numerical rankings with a grain of salt. Many commenters correctly point out that the rankings simply reflect how interested developers are, not necessarily any demand from employers seeking those skills.

Objective-C is a powerful language but suffers from much of the low-level syntax that challenges C and C++ developers. Swift is intended as a replacement for Objective-C while maintaining high compatibility and integration. It cleverly combines object-oriented and functional programming with dynamic language features, along with managed-code runtime support reminiscent of Microsoft .NET. I used to always say, "Happiness is managed code." The interactive coding and debugging in Swift Playgrounds, integrated with Apple's Xcode IDE, adds a bit of RAD (rapid application development) to the developer experience.

Most mobile application firms are doing early prototypes and training for Swift with their iOS developers, and the language will continue to mature. Since mobile app projects have relatively shorter development cycles than enterprise applications, adoption should continue at an accelerated pace. Many Ruby developers will also find Swift easier to use and understand than Objective-C. Existing Apple shops with a good inventory of Objective-C libraries can use those assets from Swift, since the two languages are cross-compatible. Within three years, I expect Swift to displace Objective-C not only for native iOS apps but for Mac OS applications as well.

A Crowdtester in Every Garage


Crowdtesting is another market disruption, stemming from crowdsourcing, that is making its way into the software development world. It is a new way to verify and validate application testing along a variety of dimensions, including functional, usability, performance and mobile capabilities. The approach is most popular with organizations developing customer-facing applications that are mobile or web-based. One benefit of crowdtesting in an agile process is that it provides early feedback from a broad pool of testers not tied to the organization or the development team. A couple of firms amassing large tester populations are uTest, with 100,000 registered testers, and Mob4hire, with 60,000. The crowdtesting companies provide tools, training and a community of interest for their virtual workers.

There are two primary delivery options, termed "communities": vetted and unvetted. An application owner would select a vetted community if functional, performance, security or localization testing were needed; an unvetted community suits early exploratory or usability testing. Application owners are responsible for understanding and verifying the testing methodologies used, so that bug identification controls are in place to prevent defect leakage or reinjection. Any crowdtesting initiative must also pass company compliance and regulatory controls, since by its very nature it can introduce security risks. Payments for services are typically based on the number of defects found, on a pre-allocated budget tied to a contest or outcome, or on a per-device and per-platform basis for mobile apps. By structuring the contract around the quantity of defects found or by time-bounding the process, you avoid protracted testing that could delay implementation.

Crowdtesting is also becoming commercially interesting to global service providers with application service resources on their bench. If a service provider has steady business that absorbs 75% of its testing staff, it can deploy the remaining 25% to crowdtesting. This improves operating profit by keeping overall resource utilization very high.
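To make that utilization math concrete, here is a minimal sketch in Java; the headcount, hours and rates are illustrative assumptions, not figures from any provider:

    // Illustrative sketch: how redeploying bench capacity to crowdtesting
    // raises a provider's blended utilization. All figures are hypothetical.
    public class BenchUtilization {
        public static void main(String[] args) {
            int testers = 100;                  // total testing staff
            double steadyShare = 0.75;          // capacity consumed by steady contracts
            double crowdShare = 0.25;           // bench capacity moved to crowdtesting
            double hoursPerTesterPerMonth = 160;

            double steadyRate = 100.0;          // hypothetical hourly rate, contract work
            double crowdRate = 40.0;            // hypothetical effective rate, crowdtesting

            double steadyOnly = testers * hoursPerTesterPerMonth * steadyShare * steadyRate;
            double blended = steadyOnly
                    + testers * hoursPerTesterPerMonth * crowdShare * crowdRate;

            System.out.printf("Utilization: %.0f%% -> %.0f%%%n",
                    steadyShare * 100, (steadyShare + crowdShare) * 100);
            System.out.printf("Monthly revenue: $%,.0f -> $%,.0f%n", steadyOnly, blended);
        }
    }

Even at a much lower effective rate, the crowdtesting hours are nearly pure contribution margin, since the bench is already a sunk cost.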

Healthy Software Development


High-quality DevOps practices produce a seamless flow of continuous development, deployment and maintenance for large-scale web applications. Today's business environment demands accelerated releases of functionality, from inception to production code. The notion of allowing developers to push code directly into production sets off alarms for most traditional IT leaders, and many experienced development managers recall stories of mistakes made in production environments, some resulting in significant business disruption. Yet leading large-scale consumer sites such as Google, Facebook, Netflix and Amazon have adopted these practices and push small code releases every day at an enormous rate.

Agile DevOps has its roots in Lean and Kanban. In lean manufacturing, cycle time is the time from when a work product is started to when the finished work product is delivered. For software, this corresponds to the time between when a user story is created and when that story is real code in production. For many high-volume sites, the preferred batch size for a release is a single user story, and each story is put into production as soon as it is complete. With today's massively dense server farms there is no way to do this manually, so automation tools must be used. We see engineering-style alignment with tools such as Puppet and Chef: if you're coming from the dev side of DevOps, the procedural nature of Chef, built on Ruby, feels natural, while Puppet appeals to ops pros since it is more mature, data-driven and geared toward sysadmins. DevOps seeks to bring these styles together.

The freedom of allowing developers to push code into production also comes with the responsibility to ensure the stability of that code after deployment. A cultural shift from project-based IT to product-based IT is necessary to make DevOps successful; otherwise you have speedy agile development sprints constrained within quarterly or monthly waterfall release cycles. Bottlenecks upstream and downstream, outside of normal scope, are addressed more readily in a product-based construct. The attitude changes from "it's not my job" to "it's my workflow," and a strong sense of ownership accompanies this new attitude. With increased velocity and shorter cycle times, we can use metrics such as mean time between failures and mean time to restore service rather than the number of lines of code produced or the number of defects resolved.
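Here is a minimal Java sketch of those two measurements; the record names and fields are hypothetical, but the definitions follow the lean ones above: cycle time runs from story creation to production, and time to restore runs from failure to recovery.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    // Minimal sketch of DevOps health metrics. Story and Outage are
    // hypothetical record types for illustration.
    public class DeliveryMetrics {
        record Story(Instant created, Instant liveInProduction) {}
        record Outage(Instant failed, Instant restored) {}

        // Lean cycle time: work started (story created) to work delivered (in production).
        static Duration meanCycleTime(List<Story> stories) {
            double avg = stories.stream()
                    .mapToLong(s -> Duration.between(s.created(), s.liveInProduction()).toSeconds())
                    .average().orElse(0);
            return Duration.ofSeconds((long) avg);
        }

        // Mean time to restore service, the counterpart to mean time between failures.
        static Duration meanTimeToRestore(List<Outage> outages) {
            double avg = outages.stream()
                    .mapToLong(o -> Duration.between(o.failed(), o.restored()).toSeconds())
                    .average().orElse(0);
            return Duration.ofSeconds((long) avg);
        }
    }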

Big Data enables context-aware Security


Enterprises are increasingly required to open and extend their network boundaries to suppliers, partners and customers to support innovative value chains and information collaboration. This scenario, along with more corporate applications being accessed via the cloud and mobile devices, makes firms vulnerable to sophisticated security threats. Big Data Analytics (BDA) applied to enterprise security promises to bring a new level of intelligence to network forensics and risk management. Information security will become more intelligence-driven, contextual and risk-aware in real time.

Collecting the data is the easy part; gaining insight into what big data is telling us about security threats is hard. BDA frameworks, along with falling infrastructure costs for data warehouses, allow massive clusters of computers to be managed efficiently with fewer people. These economics will disrupt traditional monitoring, SIEM (Security Incident & Event Management), identity management, and governance, risk & compliance (GRC) in the field. Contemporary SIEM devices do aggregation and correlation at roughly thousands of events per second; more sophisticated, big-data-enabled security management platforms should be able to process millions of events per second with the same hardware footprint. Historically, you had to do significant filtering, factoring and reduction of security data to reach a manageable size that allowed security professionals to perform analysis and make decisions. Now, the ability to mine petabytes of operational and security risk data from diverse sources can provide actionable intelligence in real time, and it is expected this mining can be done with industry-standard third-party applications through open-source methods. BDA also enables highly efficient batch processing of historical data to determine when an attack started, how initial probing went undetected and how the attacker breached your system.

Used in an enterprise security context, big data analytics provides situational awareness, automates threat detection, improves reaction times and will ultimately help with prevention. Watch for startups innovating in this space.
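As a toy illustration of the kind of context-aware detection described above (the threshold and event shape are assumptions for the sketch, not any vendor's schema), consider correlating failed logins per source instead of filtering them away:

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch of context-aware event correlation: instead of discarding
    // events up front, count failed logins per source address and flag bursts
    // that suggest probing. Threshold and field names are hypothetical.
    public class LoginCorrelator {
        private static final int ALERT_THRESHOLD = 50;
        private final Map<String, Integer> failuresBySource = new HashMap<>();

        public void onEvent(String sourceIp, boolean loginFailed) {
            if (!loginFailed) {
                failuresBySource.remove(sourceIp); // a successful login resets the window
                return;
            }
            int failures = failuresBySource.merge(sourceIp, 1, Integer::sum);
            if (failures == ALERT_THRESHOLD) {
                System.out.println("ALERT: possible probing from " + sourceIp);
            }
        }
    }

At scale, the same counting logic would run across a cluster rather than a single HashMap, which is exactly where the BDA economics come in.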

Crowdfunding Biotech's Seed Stage

I've been studying crowdfunding and how the new federal JOBS Act will attempt to give non-accredited investors access to seed rounds in early-stage startups. Once limited to artistic endeavors, charity and filmmaking, the concept has grown from the likes of Kickstarter and Indiegogo to greater prevalence in equity investment circles. According to Crowdsourcing.org, there are nearly 1,000 crowdfunding sites in existence, but until the SEC enacts Title III of the JOBS Act, we won't see the new equity crowdfunding portals provided for by the law - not yet. One capital-intensive area, biotechnology, won't see this type of funding replace traditional venture capital anytime soon. According to Scott Jordan of HealthiosExchange, the average successful biotech company raises $49 million over 5.7 years through a series of private equity rounds. I agree with his assertion that crowdfunding could help these firms achieve milestones during the seed stage that ultimately get VCs interested. There are already sites connecting a wider range of accredited angel investors and allowing them to syndicate with one another, thereby taking more positions in a portfolio of biotech startups. Diversification and "failing fast" are tremendously important in life sciences research and development.

Big Data Museums need Human Curators

Most analysts define "Big Data" subjectively, as information datasets whose size is beyond the ability of mature software tools to capture, store, manage and analyze. As people and businesses go about their lives, they generate a huge data exhaust as a by-product of social media, smartphones, computing and embedded devices. Since it is very hard for machines to pull operational insights out of big data, there is a rising need for data scientists, often referred to as data "curators." Much like a museum curator collects, catalogues, interprets and preserves artwork or historic items, a data curator works to improve the quality of data-driven information within operational processes. This also involves active lifecycle management that attempts to connect the sciences, social sciences and humanities. Even though there are programs that can poll APIs for AWS or GitHub and pull out somewhat structured data, that data cannot be fully interpreted without human intervention. This is good news: with our newfound tools, people will transform the study of the social sciences into digital humanities, where insightful connections are made to economics, law, medicine, education and communication.
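For example, a few lines of Java can poll GitHub's public REST API and retrieve repository metadata (octocat/Hello-World is GitHub's documented sample repository); what comes back is semi-structured JSON, and deciding what the activity actually means still falls to a human curator:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal sketch of machine-side data collection: poll GitHub's public
    // REST API for repository metadata. Interpretation is left to a human.
    public class GitHubPoller {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.github.com/repos/octocat/Hello-World"))
                    .header("Accept", "application/vnd.github.v3+json")
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // Raw, semi-structured output; the "curation" has not happened yet.
            System.out.println(response.body());
        }
    }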


A Spring Awakening ... for Java EE


Much has been written about the severe Amazon EC2 outage last week. It made me think about the tools needed for deploying high-availability applications in a cloud environment. Java Enterprise Edition is complex to use but remains the popular choice among enterprise application developers, and it has a huge installed base needing some form of cloud readiness. Application platform frameworks like Spring provide the runtime middleware container for custom or packaged applications running on a cloud service such as a PaaS (Platform-as-a-Service). Features of programming languages like Java, C#, C++, Ruby or Python can be extended at runtime by APIs, embedded declarative clauses or metadata patterns provided by a framework like Spring. Optimal allocation of system resources (memory, threads, connection pools, etc.), quality of service (reliability, availability, etc.) and connectivity (messaging, networks and databases) are managed on behalf of the application by the framework. With the huge investment in Java code today, many firms are adopting Spring's Model-View-Controller (MVC) for web applications, its plug-ins for the Eclipse IDE and the many web service add-ons available.

The beauty of the framework, and its relevance to cloud, is that Spring separates all business logic from application infrastructure, logical or physical. Now we can have a real application bus where you combine virtualized or non-virtualized applications with structured or unstructured data. Those applications will be exposed to both managed and unmanaged mobile devices (but that's another blog post!). We would build applications by taking blocks (objects) of code using Spring and populating the business logic using open-source lifecycle tools that exist outside of the cloud. Dynamic languages like Ruby can power large-scale web front-ends with Java EE under the hood, extending the life of existing legacy applications in a painless way. With the framework in place, cloud advances in multi-tenant governance, horizontal scaling or cloud transaction processing can take place without major application reconstruction.
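A minimal Spring MVC controller shows that separation at work. OrderService and the view name here are hypothetical stand-ins for the sketch; the container, not the class, supplies the wiring, pooling and transaction plumbing:

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;

    // Minimal sketch: business logic lives in the controller and its
    // injected collaborator; infrastructure stays in the framework.
    @Controller
    public class OrderController {

        // Hypothetical business interface, implemented elsewhere and
        // injected by the Spring container.
        public interface OrderService {
            Object findOrder(long id);
        }

        private final OrderService orderService;

        public OrderController(OrderService orderService) {
            this.orderService = orderService;
        }

        @RequestMapping("/orders/{id}")
        public String showOrder(@PathVariable("id") long id, Model model) {
            model.addAttribute("order", orderService.findOrder(id));
            return "orderDetail"; // resolved to a view template by the framework
        }
    }

Because nothing in the class touches threads, connections or hosts, the same code can move between a data center and a PaaS without reconstruction, which is the whole point.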

VDI comes to the iPad

It is no secret that iPads are gaining traction in the corporate enterprise, with their ideal form factor, weight and battery life. What Nicholas Carr didn't see in his iconoclastic report "IT Doesn't Matter" was how the tightening of the technology stack would be enabled by consumer endpoints capable of running any application from a data center or the cloud. I was interested to see this week's VMware View 4.6 for the iPad joining the category of apps like Citrix Receiver and the new Rackspace OpenStack admin app. VMware had to program custom gestures to blend the iOS experience with what most users actually run over VDI - Windows. Now you can use two fingers to right-click and Apple-like drag-and-drop to interact with Windows. Heck, you can even run Flash on the iPad. Unlike Microsoft's RDP protocol, VMware uses PCoIP, which transmits only changing pixels over the network to stateless endpoints. Since the protocol can tunnel over HTTPS, proxies and firewalls don't block it; this enables virtual end-to-end tapping, finding your nodes without knowing their physical path. The iPad used as a VDI device helps Microsoft defend its presence in the enterprise, since the access device satisfies users and may reduce complaints about enterprise application user experience. "The sun shines, and people forget; the spray flies as the speedboat glides, and people forget..." People forget how many Windows workstations are still out there.

Near Field coming Near You


Even though Near Field Communication (NFC) has been around for 15 years, it could become mainstream in the U.S. smartphone market this year. NFC operates at 13.56 MHz, at speeds from 106 kbit/s to 848 kbit/s, all within a 4 cm range. We are finally catching up with Japan (e.g., the Osaifu-Keitai system) and other parts of the world where NFC is used for mobile commerce and payments. With better software integration, you now have the intersection of context, proximity and event handlers that blends the physical and virtual worlds. It would make sense for Google to announce a mobile payment platform, since NFC is natively supported in Android 2.3. You also have to consider other players with a little more "trust" than Google, such as Apple iTunes or even PayPal. Merchant players like First Data or GPN are reluctant to adopt an offering that is not an industry standard. MasterCard and Visa have made progress raising consumer awareness of NFC, but financial institutions are not good catalysts for ecosystems. Even though NFC silicon can be standardized, individual competitors bringing their own implementations of payment systems can stall adoption and create payment silos. The battle will be over which model prevails - operator-centric, bank-centric, collaboration-centric or peer-to-peer. Perhaps it doesn't matter, since once users have selected a smartphone platform, they automatically get its mobile payment system. Otherwise we would need something like "payment roaming," similar to what evolved during the early expansion of cellular networks and billing systems.
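Because NFC is native in Android 2.3, reading a tag takes only a few lines. This sketch handles an NDEF intent and is a bare-bones illustration; a real payment flow would involve a secure element rather than application code like this:

    import android.app.Activity;
    import android.content.Intent;
    import android.nfc.NdefMessage;
    import android.nfc.NfcAdapter;
    import android.os.Parcelable;
    import android.util.Log;

    // Bare-bones sketch: receive an NDEF message from a nearby tag.
    // Assumes the activity is registered for ACTION_NDEF_DISCOVERED.
    public class TagReaderActivity extends Activity {
        @Override
        protected void onNewIntent(Intent intent) {
            super.onNewIntent(intent);
            if (NfcAdapter.ACTION_NDEF_DISCOVERED.equals(intent.getAction())) {
                Parcelable[] raw = intent.getParcelableArrayExtra(NfcAdapter.EXTRA_NDEF_MESSAGES);
                if (raw == null) return;
                for (Parcelable p : raw) {
                    NdefMessage message = (NdefMessage) p;
                    // Each record carries a type and payload bytes; a payment
                    // credential would live in a secure element, not here.
                    Log.i("NFC", "Read " + message.getRecords().length + " record(s)");
                }
            }
        }
    }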
