Microsoft plans to extend IntelliSense code analysis for Python to tools beyond Visual Studio, using its Python Language Server. IntelliSense provides autocompletions for variables, functions, and other symbols that appear as developers type code.
Available as a beta in the July release of the Python extension for Visual Studio, the Python Language Server will be offered later this year as a standalone component for use with tools that support the Language Server Protocol. That protocol lets editing tools and IDEs support multiple languages.
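The protocol's plumbing is easy to see in miniature. The sketch below builds the `initialize` request a client sends first, using the base framing the LSP specification defines (a `Content-Length` header followed by a JSON-RPC body); the payload values are minimal placeholders, not what Visual Studio Code actually sends:

```python
import json

def frame_lsp_message(payload: dict) -> bytes:
    """Frame a JSON-RPC message with the Content-Length header LSP requires."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

# A minimal "initialize" request, the first message a client sends to a server.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {"processId": None, "rootUri": None, "capabilities": {}},
}

message = frame_lsp_message(initialize)
```

Because every editor that speaks this framing can talk to every server that does, one language server can back many tools at once.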
Two leading analysts’ reports, from Intersect360 Research and Hyperion Research, show the high-performance computing market has reached an inflection point. The cloud segment includes Microsoft, Amazon Web Services, and Google.
Intersect360 says high-performance cloud spending by high-performance computing customers grew by 44 percent from 2016 to 2017, to about $1.1 billion—much faster than the growth in the total high-performance computing market, which is still mostly traditional on-premises hardware clusters.
The two related reasons for the faster cloud adoption of high-performance computing are pretty clear to me.
Collette Stumpf is a software designer at Surge.
Successful software projects please customers, streamline processes, or otherwise add value to your business. But how do you ensure that your software project will result in the improvements you are expecting? Will users experience better performance? Will the productivity across all tasks improve as you hoped? Will users be happy with your changes and return to your product again and again as you envisioned?
AI’s rapid evolution is producing an explosion in new types of hardware accelerators for machine learning and deep learning.
Some people refer to this as a “Cambrian explosion,” which is an apt metaphor for the current period of fervent innovation. It refers to the period about 500 million years ago when essentially every biological “body plan” among multicellular animals appeared for the first time. From that point onward, these creatures—ourselves included—fanned out to occupy, exploit, and thoroughly transform every ecological niche on the planet.
A distributed file system, a MapReduce programming framework, and an extended family of tools for processing huge data sets on large clusters of commodity hardware, Hadoop has been synonymous with “big data” for more than a decade. But no technology can hold the spotlight forever.
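The MapReduce model Hadoop popularized is simple to sketch. The standalone Python below imitates the two phases a Hadoop Streaming word-count job would run across a cluster; it is an illustration of the programming model only, not Hadoop's own API:

```python
from itertools import groupby

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reducer(pairs):
    """Reduce phase: sum counts per key. Hadoop sorts pairs by key
    between the two phases, which the sorted() call stands in for here."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield (word, sum(count for _, count in group))

counts = dict(reducer(mapper(["big data big clusters", "data"])))
```

On a real cluster the mapper and reducer run as separate processes on many machines, with the framework handling the shuffle and sort in between.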
Hadoop remains an essential part of big data platforms, but the major Hadoop vendors (namely Cloudera, Hortonworks, and MapR) have changed their platforms dramatically. Once-peripheral projects like Apache Spark and Apache Kafka have become the new stars, and the focus has turned to other ways to drill into data and extract insight.
Let’s take a brief tour of the three leading big data platforms, what each adds to the mix of Hadoop technologies to set it apart, and how they are evolving to embrace a new era of containers, Kubernetes, machine learning, and deep learning.
Anaconda, the Python language distribution and work environment for scientific computing, data science, statistical analysis, and machine learning, is now available in version 5.2, with additions to both its enterprise and open-source community editions.
Where to download Anaconda 5.2
The community edition of Anaconda Distribution is available for free download directly from Anaconda’s website. The for-pay enterprise edition, with professional support, requires contacting the Anaconda (formerly Continuum Analytics) sales team.
Version 2.14 of GitHub Enterprise, the behind-the-firewall version of GitHub’s code-sharing platform tuned for businesses, improves configuration visibility and adds anonymous Git read access.
Users can configure visibility for new members of an organization, across private or public instances. Administrators also can prevent users from changing their visibility from the default configuration. Default settings can be enforced through a command-line utility.
GitHub Enterprise Version 2.14 also adds the ability for administrators to enable anonymous Git read access to public repositories when an instance is in a private mode. Anonymous read access can let users bypass authentication requirements for custom tools on an instance.
The Azure service platform’s adoption of Kubernetes and containers changes how you build, deploy, and manage cloud-native applications, treating containers and services as the targets of your builds, rather than the code that makes up those services.
Kubernetes itself automates much of what had been infrastructure tasks, orchestrating and managing containers. Azure’s AKS tools simplify configuring Kubernetes, but you need to deploy straight into an AKS instance—a hurdle for anyone developing new apps or handling a migration of an existing service. Although AKS itself isn’t expensive, setting up and tearing down orchestration models takes time—time that can better be spent writing and debugging code.
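For context, deploying into a Kubernetes-based service such as AKS means submitting declarative manifests like the sketch below; the application name and registry image path are hypothetical placeholders, and a real cluster would receive this via `kubectl apply`:

```yaml
# Minimal Deployment manifest for a hypothetical service. The name and
# image are placeholders; AKS schedules the requested replicas for you.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-api
  template:
    metadata:
      labels:
        app: sample-api
    spec:
      containers:
        - name: sample-api
          image: myregistry.azurecr.io/sample-api:1.0
          ports:
            - containerPort: 8080
```

Even a manifest this small presumes a running cluster, which is exactly the setup-and-teardown overhead described above.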
How much does your public cloud cost month to month? If you don’t know, you’re hardly alone. Most people in IT don’t have a good understanding of what a public cloud service costs per month. Most wait to find out what the bill says rather than proactively monitor cloud consumption, much less have cloud cost governance in place.
Even if your financial budgeting model can handle uncertain costs, not knowing what you’re spending has a downside. When you moved to the public cloud, your company put a value driver in place when defining the business cases—and part of that was based on ongoing costs per month.
If those costs are higher than originally estimated, the value metrics won’t support your goals. Although you can make a case for the cloud’s value around agility and compressing time to market, that will fall on deaf ears among your business leaders if you’re 20 to 30 percent over budget for ongoing cloud costs.
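The arithmetic behind such an overrun is worth making concrete. This small Python sketch uses illustrative numbers rather than real billing data, showing how a monthly bill turns into the percentage a business case is judged by:

```python
def budget_variance(budgeted: float, actual: float) -> float:
    """Return overspend as a fraction of budget (0.25 means 25 percent over)."""
    return (actual - budgeted) / budgeted

# A hypothetical month: $50,000 budgeted, $63,500 billed -- a 27 percent
# overrun, squarely in the 20-to-30-percent range described above.
overrun = budget_variance(50_000, 63_500)
```

Computing this figure monthly, rather than discovering it at year's end, is the starting point for cost governance.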
Deep learning is an important part of the business of Google, Amazon, Microsoft, and Facebook, as well as countless smaller companies. It has been responsible for many of the recent advances in areas such as automatic language translation, image classification, and conversational interfaces.
We haven’t gotten to the point where there is a single dominant deep learning framework. TensorFlow (Google) is very good, but has been hard to learn and use. Also TensorFlow’s dataflow graphs have been difficult to debug, which is why the TensorFlow project has been working on eager execution and the TensorFlow debugger. TensorFlow used to lack a decent high-level API for creating models; now it has three of them, including a bespoke version of Keras.
ASP.NET Web API is a lightweight framework that can be used for building RESTful HTTP services. When working with controller methods in Web API, you will often need to pass parameters to those methods. A “parameter” here simply refers to the argument to a method, while “parameter binding” refers to the process of setting values to the parameters of the Web API methods.
Note that there are two ways in which Web API can bind parameters: model binding and formatters. Model binding is used to read from the query string, while formatters are used to read from the request body. You can also use type converters to enable Web API to treat a class as a simple type and then bind the parameter from the URI. To do this, you would need to create a custom TypeConverter. You can also create a custom model binder by implementing the IModelBinder interface in your class and then implementing the BindModel method. For more on type converters and model binders, take a look at this Microsoft documentation.
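As a rough sketch of those two binding paths (the controller, routes, and `Product` class here are hypothetical, not from any particular project), `[FromUri]` pulls simple types from the query string through model binding, while `[FromBody]` hands the request payload to a media-type formatter:

```csharp
using System.Web.Http;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductsController : ApiController
{
    // GET api/products?category=books&page=2
    // Simple types bound from the query string via model binding.
    public IHttpActionResult Get([FromUri] string category, [FromUri] int page = 1)
    {
        return Ok(new { category, page });
    }

    // POST api/products -- the JSON body is deserialized by a formatter.
    public IHttpActionResult Post([FromBody] Product product)
    {
        return Ok(product);
    }
}
```

By default Web API would bind `Product` from the body anyway, since it is a complex type; the attributes make the intent explicit.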
According to a recent report from IDC, “worldwide revenues for big data and business analytics will grow from nearly $122 billion in 2015 to more than $187 billion in 2019, an increase of more than 50 percent over the five-year forecast period.”
Anyone in enterprise IT already knows that big data is a big deal. If you can manage and analyze massive amounts of data—I’m talking petabytes—you’ll have access to all sorts of information that will help you run your business better.
Right? Sadly, for most enterprises, no.
One key devops best practice is instrumenting a continuous integration/continuous delivery (CI/CD) pipeline that automates the process of building software, packaging applications, deploying them to target environments, and instrumenting service calls to enable the application. This automation requires scripting individual procedures and orchestrating the steps from code checkin to running application. Once matured, devops teams use the automation to drive process change and strive to do smaller, more frequent deployments that deliver new functionality to users and improve quality.
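Such a pipeline is typically declared as configuration. The sketch below uses GitLab CI-style YAML purely as an illustration; the job names, registry address, and deploy script are placeholders for a team's own tooling:

```yaml
# Hypothetical pipeline sketch mapping the stages described above:
# build and test, package, then deploy to a target environment.
stages:
  - build
  - package
  - deploy

build:
  stage: build
  script:
    - make test          # compile and run unit tests on every checkin

package:
  stage: package
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - ./scripts/deploy.sh staging   # roll the new image out to the environment
```

Because each stage is scripted, shrinking the batch size of a release is a configuration change rather than a process overhaul.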
Sebastian Stadil is the CEO and founder of Scalr.
Enterprises are moving to multicloud in droves. Why? The key drivers most often cited by cloud adopters are speed, agility, platform flexibility, and reduced costs—or at least more predictable costs. It’s ironic then that more than half of these companies say that runaway cloud costs are their biggest postmigration pain point.
The power of Docker images is that they’re lightweight and portable—they can be moved freely between systems. You can easily create a set of standard images, store them in a repository on your network, and share them throughout your organization. Or you could turn to Docker Inc., which has created various mechanisms for sharing Docker container images in public and private.
The most prominent among these is Docker Hub, the company’s public exchange for container images. Many open source projects provide official versions of their Docker images there, making it a convenient starting point for creating new containers by building on existing ones, or just obtaining stock versions of containers to spin up a project quickly. And you get one private Docker Hub repository of your own for free.
I hear it every day now: “We’re moving beyond cloud computing to edge computing.” Pretty hypey, and not at all logical.
Edge computing is a handy trick. It’s the ability to place processing and data retention on systems close to the devices they collect data from, and to provide autonomous processing at those locations.
The architectural advantages are plenty, including not having to transmit all the data to the back-end systems—typical in the cloud—for processing. This reduces latency and can provide better security and reliability as well.