2016 week 50 in programming

AMD creates a tool to convert CUDA code to portable, vendor-neutral C++

HIP allows developers to convert CUDA code to portable C++. The same source code can be compiled to run on NVIDIA or AMD GPUs. HIP provides porting tools which make it easy to port existing CUDA code to the HIP layer, with no loss of performance compared to the original CUDA application. HIP is not intended to be a drop-in replacement for CUDA, and developers should expect to do some manual coding and performance-tuning work to complete the port. Programmers familiar with CUDA will also be able to quickly learn and start coding with the HIP API. Compute kernels are launched with the "hipLaunchKernel" macro call. On the NVIDIA CUDA platform, HIP provides a header file which translates from the HIP runtime APIs to CUDA runtime APIs. The header file contains mostly inlined functions and thus has very low overhead - developers coding in HIP should expect the same performance as coding in native CUDA. The code is then compiled with nvcc, the standard C++ compiler provided with the CUDA SDK. Developers can use any tools supported by the CUDA SDK, including the CUDA profiler and debugger. The HIP runtime implements HIP streams, events, and memory APIs, and is an object library that is linked with the application.
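
For a feel of what ported code looks like, here is a minimal vector-add sketch against the HIP runtime API as currently documented (hipMalloc, hipMemcpy, hipFree and the hipLaunchKernelGGL launch macro); the 2016 release used a slightly different hipLaunchKernel macro, so treat the exact launch syntax as approximate.

```cpp
// Minimal HIP vector-add sketch (not from the article). Assumes the HIP runtime
// API (hipMalloc, hipMemcpy, hipFree) and the hipLaunchKernelGGL launch macro.
#include <hip/hip_runtime.h>
#include <cstdio>
#include <vector>

__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n, 0.0f);

    float *da, *db, *dc;
    hipMalloc((void**)&da, n * sizeof(float));
    hipMalloc((void**)&db, n * sizeof(float));
    hipMalloc((void**)&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    // The same launch compiles for the AMD or NVIDIA back end.
    hipLaunchKernelGGL(vec_add, dim3((n + 255) / 256), dim3(256), 0, 0, da, db, dc, n);
    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);

    std::printf("c[0] = %f\n", hc[0]);  // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
}
```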

Pixar’s use of mass-spring systems (Pixar in a Box)

Oracle is massively ramping up audits of Java customers it claims are in breach of its licences – six years after it bought Sun Microsystems

Oracle is massively ramping up audits of Java customers it claims are in breach of its licences - six years after it bought Sun Microsystems. Oracle acquired Java along with Sun Microsystems in 2010, but only now is its License Management Services division chasing people down for payment, we are told by people familiar with the matter. The Register has learned of one customer in the retail industry with 80,000 PCs that was informed by Oracle it was in breach of its Java agreement. The perception that Java is free dates from the time of Sun; Java under Sun was available for free - as it is under Oracle - but for a while Sun did charge a licence fee to companies like IBM and makers of Blu-ray players, though for the vast majority, Java came free of charge. Java SE is a broad and all-encompassing download that includes Java SE Advanced Desktop, introduced by Oracle in February 2014, and Java SE Advanced and Java SE Suite, introduced by Oracle in May 2011. If you want to roll out Java SE in a big deployment, as you would following development of your app, then you'll need the Microsoft Windows Installer Enterprise JRE Installer - and that's not part of the free Java SE. "People aren't aware," Guarente told The Reg. Why is Oracle acting now, six years into owning Java through the Sun acquisition?

Dear hackers, Ubuntu’s app crash reporter will happily execute your evil code on a victim’s box

Researcher Donncha O’Cearbhaill, who discovered and privately reported the vulnerabilities to the Ubuntu team, said that a successful exploit of the bugs could allow an attacker to remotely execute code by tricking a victim into downloading a maliciously booby-trapped file. In this case, O’Cearbhaill says, his exploit code takes advantage of the Apport crash reporting tool on Ubuntu. By exploiting the flaws, an attacker could gain control over the targeted Ubuntu box simply by convincing the victim to open a single document that then targets the flaws in the crash reporter. O’Cearbhaill has provided a copy of the source code for his proof-of-concept on GitHub, as well as a video showing the exploit in action - opening a ZIP archive from the internet containing a malicious crash report runs a calculator program. "I would encourage all security researchers to audit free and open source software if they have time on their hands," the researcher said. At the same time, O’Cearbhaill notes the reality that many researchers face the dilemma of selling their discoveries to third-party brokers who may not immediately report the flaws or find other nefarious uses for the zero-day vulnerabilities. "To improve security for everyone we need to find sustainable ways to incentivize researchers to find and disclose issues and to get bugs fixed," said O’Cearbhaill.

Visual Studio Code 1.8 released

Node.js debugging: Just My Code, loading environment variables from files, and help for source maps. Using multi-target debugging is very simple: after you've started a first debug session, VS Code no longer blocks you from launching another session. In the November release, only the built-in Node.js debuggers contribute snippets. Use the 'old' debugger node when debugging Node.js versions < 6.3 and the new debugger node2 for versions >= 6.3. The VS Code Node debugger can now load environment variables from a file and pass them to the node runtime. The OutputEvent type now supports sending structured objects to the debug console, and VS Code renders them as expandable objects. If a debug adapter opts into this, the VS Code debugger UI no longer implements the Restart action by terminating and restarting the debug adapter but instead sends a RestartRequest to the adapter.

JetBrains Gogland: Capable and Ergonomic Go IDE

Thanks for your interest in being a part of the private Gogland Early Access Program! We’ve added you to the list and will email you a link to a fresh Gogland EAP build once your request is approved. The IDE is still in its early development stages so it may take some time before a working build is available for your particular environment and requirements.

How Discord handles push request bursts of over a million per minute with Elixir’s GenStage

Stage 1 - the Push Collector: The Push Collector is a producer that collects push requests. Stage 2 - the Pusher: The Pusher is a consumer that demands push requests from the Push Collector and pushes the requests to Firebase. The Push Collector never sends a request to a Pusher unless the Pusher asks for one. Load shedding: since the Pushers put back-pressure on the Push Collector, we now have a potential bottleneck at the Push Collector. In the Push Collector, we specify how many push requests to buffer. If there are way too many messages moving through the system and the buffer fills up, then the Push Collector will shed incoming push requests. The bottom graph is the number of push requests buffered by the Push Collector.
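
Since GenStage is Elixir and the excerpt is terse, here is a language-neutral sketch of the demand-and-load-shedding idea (written in C++, not Discord's code): the consumer explicitly asks for a bounded amount of work, the producer never hands over more than was demanded, and once its buffer is full it drops new requests. All names and numbers are invented.

```cpp
// Toy, single-threaded illustration of GenStage-style demand and load shedding.
// Not Discord's code; names and numbers are made up for the example.
#include <cstddef>
#include <cstdio>
#include <deque>
#include <string>

class PushCollector {                       // "producer": buffers incoming pushes
public:
    explicit PushCollector(size_t max_buffer) : max_buffer_(max_buffer) {}

    void on_push_request(const std::string& req) {
        if (buffer_.size() >= max_buffer_) { ++shed_; return; }   // load shedding
        buffer_.push_back(req);
    }

    // Deliver at most `demand` buffered requests; never more than was asked for.
    std::deque<std::string> take(size_t demand) {
        std::deque<std::string> out;
        while (out.size() < demand && !buffer_.empty()) {
            out.push_back(buffer_.front());
            buffer_.pop_front();
        }
        return out;
    }

    size_t shed() const { return shed_; }

private:
    std::deque<std::string> buffer_;
    size_t max_buffer_;
    size_t shed_ = 0;
};

struct Pusher {                             // "consumer": asks for work, then pushes
    void run_once(PushCollector& collector) {
        for (const auto& req : collector.take(100))   // demand 100 at a time
            std::printf("push to Firebase: %s\n", req.c_str());
    }
};

int main() {
    PushCollector collector(1000);
    Pusher pusher;
    for (int i = 0; i < 5000; ++i)                     // burst of incoming requests
        collector.on_push_request("message " + std::to_string(i));
    pusher.run_once(collector);                        // only 100 leave per demand cycle
    std::printf("requests shed: %zu\n", collector.shed());
}
```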

I am tired of Makefiles

At Cesanta, we use Makefiles to build our firmware and libraries and to perform other tasks. I want my incremental builds to be reliable, and I want to be able to reuse my Makefiles as needed. Now, we have a project app with a separate Makefile, and we want to use mylib. We don't want the app target to always get rebuilt. The app won't be rebuilt, even though we want it to be. My point is that if the author of a Makefile wants some variable to be overridable, they should just use FOO ?= foo. There are plenty of other issues, some of which require ancient wisdom to write Makefiles that are correct, but I got used to most of them.

.NET Core roadmap: .NET Core 2.0 and .NET Standard 2.0 planned for Spring 2017

We are bringing the .NET Core 1.0 tooling to RTM quality for the Visual Studio 2017 RTM. However, we are also starting to think about the next version of the runtime. You can also get the .NET Core repository from GitHub and build the product yourself. Ship dates: .NET Core 2.0 - Spring 2017; .NET Standard 2.0 - Spring 2017. .NET Core is a general-purpose, modular, cross-platform and open-source implementation of .NET. Microsoft provides commercially reasonable support for ASP.NET Core 1.0, .NET Core 1.0 and Entity Framework Core 1.0 on the OS and version combinations detailed in the roadmap, covering Windows, Linux, and Mac OS X. For an explanation of available support options, please visit Support for Business and Developers. .NET Core will ship as part of many Linux distros, and we are actively working with key partners in the Linux community to make that natural.

DCCP: The socket type you probably never heard of

DCCP makes use of Explicit Congestion Notification, but it is transparent to the application. DCCP can cater to the different needs of applications by allowing applications to negotiate the congestion control scheme. DCCP congestion control schemes are denoted by Congestion Control Identifiers - CCIDs. The largest packet size that does not require fragmentation anywhere along a path is referred to as the path maximum transmission unit, or PMTU. Applications can usually get better error tolerance by producing packets smaller than the PMTU. DCCP defines a maximum packet size based on the PMTU and the congestion control scheme used for each connection. Be sure to enable all the CCIDs in the kernel configuration in Networking Support -> Networking Options -> The DCCP Protocol -> DCCP CCIDs Configuration. Like the Debian Installation Guide says, "Don't be afraid to try compiling the kernel. It's fun and profitable." For now, Linux is the only operating system supporting native DCCP, unless you count the patch for an ancient version of FreeBSD. Example in C: the server and client look almost exactly the same as their TCP counterparts, with the exception of the socket type and the setting of the service code. Although Linux DCCP NAT is functional, many intermediate boxes will probably just drop DCCP traffic.
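
To make "almost exactly the same as TCP except the socket type and service code" concrete, here is a minimal Linux client sketch (C-style C++, error handling omitted). It assumes the SOCK_DCCP, IPPROTO_DCCP, SOL_DCCP, and DCCP_SOCKOPT_SERVICE constants exposed by the kernel headers; the port, address, and service code are arbitrary.

```cpp
// Minimal DCCP client sketch for Linux (error handling omitted).
// Assumes the DCCP constants from <linux/dccp.h> and the kernel socket headers.
#include <arpa/inet.h>
#include <linux/dccp.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    // Only the socket type/protocol differ from the usual TCP boilerplate.
    int fd = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);

    // DCCP connections carry a 32-bit service code; both ends must agree on it.
    uint32_t service = htonl(42);
    setsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_SERVICE, &service, sizeof(service));

    sockaddr_in addr {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0) {
        const char msg[] = "hello over DCCP";
        send(fd, msg, sizeof(msg), 0);   // datagram-oriented, unlike TCP's byte stream
    } else {
        perror("connect");
    }
    close(fd);
}
```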

Well, what started as an r/ProgrammerHumor joke is now a real programming language. Enter Coding, a stack-based markup language.

Coding has two stacks, one for strings and another for HTML output. Parentheses push to the string stack; for example, (asdf) would push asdf to the string stack. The standard library has a function S that can push from the string stack to the output stack. >: pops the top string stack element, wraps it with a tag name, and pushes it to the output stack. Another operator adds an attribute to the top output stack element using the popped top string stack element. S: pushes the top string stack element to the output stack in text form. Se: pushes the HTML string form of the top output stack element to the string stack.

NIST: No character requirements for passwords and no frequent password changes

Authenticator Assurance Level 2 - AAL 2 provides high confidence that the claimant controls the authenticator registered to a subscriber. The authenticator secret is the canonical example of a long term authentication secret, while the authenticator output, if it is different from the authenticator secret, is usually a short term authentication secret. At AAL 2, it is required to have a multi-factor authenticator, or a combination of two single-factor authenticators. Multi-factor OTP authenticators operate in a similar manner to single-factor OTP authenticators, except that they require the entry of either a memorized secret or use of a biometric to obtain a password from the authenticator. In contrast, authenticators that involve the manual entry of an authenticator output, such as out of band and one-time password authenticators, SHALL NOT be considered verifier impersonation resistant because they assume the vigilance of the claimant to determine that they are communicating with the intended verifier. Loss, theft, damage to and unauthorized duplication of an authenticator are handled similarly, because in most cases one must assume that a lost authenticator has potentially been stolen or recovered by someone that is not the legitimate claimant of the authenticator. To facilitate secure reporting of loss or theft of or damage to an authenticator, the CSP SHOULD provide the subscriber a method to authenticate to the CSP using a backup authenticator; either a memorized secret or a physical authenticator MAY be used for this purpose.

GitHub Is Building a Coder’s Paradise. It’s Not Coming Cheap - The VC-backed unicorn startup lost $66 million in nine months of 2016, financial documents show

GitHub Inc. is losing money through profligate spending and has stood by as new entrants emerged in a software category it essentially gave birth to, according to people familiar with the business and financial paperwork reviewed by Bloomberg. GitHub management may have been a little too eager to spend the new money. GitHub lost $27 million in the fiscal year that ended in January 2016, according to an income statement seen by Bloomberg. That’s more than twice as much lost in any nine-month time frame by Twilio Inc., another maker of software tools founded the same year as GitHub. Wanstrath started GitHub with three friends during the recession of 2008 and bootstrapped the business for four years. GitHub quickly became essential to the code-writing process at technology companies of all sizes and gave birth to a new generation of programmers by hosting their open-source code for free. GitHub says it has 18 million users, and its Enterprise service is used by half of the world’s 10 highest-grossing companies, including Wal-Mart Stores Inc. and Ford Motor Co. Some longtime GitHub fans weren’t happy with the new direction, though.

A Visual and Interactive Guide to the Basics of Neural Networks [meant mainly for developers with no AI experience]

For each point, the error is measured by the difference between the actual value and the predicted value, raised to the power of 2. If we add a bias, we can find values that improve the model. Our lines can better approximate our values now that we have this b value added to the line formula. The two new graphs are there to help you track the error values as you fiddle with the parameters of the model. The neural networks we've been toying around with until now all do "regression" - they calculate and output a "continuous" value. In classification problems, the neural network's output has to be from a set of discrete values like "Good" or "Bad". This translates the values into something like: the network is 88% sure that the inputted value is "Bad", and our friend would not like that house.
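
A compressed sketch of the two ideas above - a weighted input plus a bias scored with squared error (regression), and a softmax over two output scores that turns them into "Good"/"Bad" percentages (classification). The weights and data are invented, not the article's.

```cpp
// Tiny illustration of regression (w*x + b, squared error) vs. classification
// (softmax over two scores). Weights and data are made up, not from the article.
#include <cmath>
#include <cstdio>

double predict(double x, double w, double b) { return w * x + b; }

double squared_error(double predicted, double actual) {
    double d = predicted - actual;
    return d * d;
}

// Turn two raw scores into probabilities for "Good" / "Bad".
void softmax2(double good_score, double bad_score, double out[2]) {
    double eg = std::exp(good_score), eb = std::exp(bad_score);
    out[0] = eg / (eg + eb);
    out[1] = eb / (eg + eb);
}

int main() {
    // Regression: price ~ w * area + b
    double w = 0.12, b = 30.0;                  // parameters you "fiddle with"
    double area = 2000.0, actual_price = 280.0;
    double predicted = predict(area, w, b);
    std::printf("predicted %.1f, squared error %.1f\n",
                predicted, squared_error(predicted, actual_price));

    // Classification: two output nodes are scored, then softmax'd.
    double p[2];
    softmax2(/*good=*/0.5, /*bad=*/2.5, p);
    std::printf("Good: %.0f%%  Bad: %.0f%%\n", p[0] * 100, p[1] * 100);  // ~12% / 88%
}
```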

Riot Games Engineering: Elementalist Lux: 10 Skins in 30 Megabytes

The initial conversation about Elementalist Lux’s memory requirements sounded something like this: “Lux will have 10 forms, each the size and scope of a full skin. One full skin takes about 20 megabytes of in-game memory, so we would need 200 megabytes for Elementalist Lux.” With a maximum memory budget of 30 megabytes per skin, this obviously was not going to fly. Why did most of our skins need 20 megabytes of in-game memory? And where does all of that memory go for something relatively simple like a game character anyway? The VFX artists took advantage of these techniques to severely reduce the memory overhead of all of Elementalist Lux’s effects. Mission Accomplished, Right? Celebrations were had, but when we booted up the memory report, we were surprised to see that Elementalist Lux was still 20% over budget! With under a week left until Lux hit PBE, we had to find and fix the discrepancy before it was too late. Some quick sleuthing using our memory reporting tools showed that we were allocating over 10 megabytes of in-game memory to effects without even accounting for textures. With Elementalist Lux however, we were loading upwards of 3000 emitters, and her memory cost quickly ballooned. While we still have more work ahead to modularize the emitter system such that each effect pays the memory cost for only what it uses, this was a great first step that helped bring Elementalist Lux under budget and saved memory overall throughout the game.

All code from “Machine Learning with TensorFlow” is now available on GitHub

This is the official code repository for Machine Learning with TensorFlow. Warning: The book will be released in a month or two, so this repo is a pre-release of the entire code. I will be heavily updating this repo in the coming weeks. Stay tuned, and follow along! :). Get started with machine learning using TensorFlow, Google’s latest and greatest machine learning library.

The Open/Closed Principle

One of the principles - the open/closed principle - is often misunderstood. The author clearly misunderstood the principle and is now advocating against dependency injection on the grounds that it makes extending things a pain. Your code hides its dependencies and has become very hard to test. How do we follow the open/closed principle while using dependency injection at the same time? So why not decouple everything and make the whole thing easily extensible in the spirit of the open/closed principle? We had to write a little more code and a few more classes, but in the end we ended up with very simple code and little mental overhead compared to using inheritance. We followed the open/closed principle while also following the other SOLID principles.
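
A minimal sketch of the combination the post argues for: depend on an abstraction, inject it, and extend behaviour by adding new implementations rather than editing existing classes. All class names are invented for illustration.

```cpp
// Open/closed + dependency injection: new behavior is added by writing a new
// implementation of the interface, not by modifying ReportSender.
#include <cstdio>
#include <memory>
#include <string>

struct Notifier {                       // the abstraction callers depend on
    virtual ~Notifier() = default;
    virtual void send(const std::string& message) = 0;
};

struct EmailNotifier : Notifier {
    void send(const std::string& message) override {
        std::printf("email: %s\n", message.c_str());
    }
};

struct SlackNotifier : Notifier {       // added later; nothing else changes
    void send(const std::string& message) override {
        std::printf("slack: %s\n", message.c_str());
    }
};

class ReportSender {                    // closed for modification
public:
    explicit ReportSender(std::unique_ptr<Notifier> notifier)
        : notifier_(std::move(notifier)) {}      // dependency is injected, not hidden

    void daily_report() { notifier_->send("daily report ready"); }

private:
    std::unique_ptr<Notifier> notifier_;
};

int main() {
    ReportSender email_sender(std::make_unique<EmailNotifier>());
    ReportSender slack_sender(std::make_unique<SlackNotifier>());
    email_sender.daily_report();
    slack_sender.daily_report();
}
```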

SEGA game coding in assembly language - computerphile

Matt Phillips is creating a brand new game for a 25-year-old console. Computerphile is a sister project to Brady Haran’s Numberphile.

How Dropbox securely stores your passwords

It’s universally acknowledged that it’s a bad idea to store plain-text passwords. If a database containing plain-text passwords is compromised, user accounts are in immediate danger. While hashing prevents the direct reading of passwords in case of a compromise, all hashing mechanisms necessarily allow attackers to brute force the hash offline, by going through lists of possible passwords, hashing them, and comparing the result. Some implementations of bcrypt truncate the input to 72 bytes, which reduces the entropy of the passwords. Other implementations don’t truncate the input and are therefore vulnerable to DoS attacks because they allow the input of arbitrarily long passwords. On top of bcrypt, the hashes are encrypted with a global pepper, a key stored separately from the password database. As a result, if only the password storage is compromised, the password hashes are encrypted and of no use to an attacker. Our password hashing procedure is just one of many measures we use to secure Dropbox.

[PDF] “Concepts: The Future of Generic Programming” by Bjarne Stroustrup

Designing with concepts: what makes a good concept? Ideally, a concept represents a fundamental concept in some domain, hence the name “concept.” A concept has semantics; it means something; it is not just a set of unrelated operations and types. Examples are C/C++ built-in type concepts (arithmetic, integral, floating), STL concepts like iterators and containers, mathematical concepts like monad, group, ring, and field, and graph concepts like edges and vertices, graph, DAG, etc. The first step in designing a good concept is to consider what is a complete set of properties to match the domain concept, taking into account the semantics of that domain concept. We call such overly simple or incomplete concepts “constraints” to distinguish them from the “real concepts.” Matching types to concepts: how can the writer of a new type be sure it matches a concept? That’s easy: we simply static assert that the desired concept matches. Overload resolution based on concepts is fundamentally simple: if a function matches the requirements of one concept only, call it; if a function matches the requirements of no concept, the call is an error; if a function matches the requirements of two concepts, see if the requirements of one of those concepts are a subset of the requirements of the other. Concepts like classes: based on experience with other languages and experimentation with C++0x concepts, some people are convinced that concepts should be defined like classes. We have worked together on concepts for many years and share some favorite examples that we have used in the design of concepts and to explain concepts.
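
For flavor, here is what those two mechanics look like in C++20 syntax (which differs in details from the Concepts TS draft the paper is written against): a static_assert checking that a type satisfies a concept, and overload resolution picking the more constrained overload. The Number and Integer concepts are invented for the example.

```cpp
// C++20 concepts sketch: checking a type against a concept with static_assert,
// and overloading where the more constrained concept wins. Compile with -std=c++20.
#include <concepts>
#include <cstdio>

template <typename T>
concept Number = std::integral<T> || std::floating_point<T>;

template <typename T>
concept Integer = Number<T> && std::integral<T>;   // refines (subsumes) Number

// "We simply static assert that the desired concept matches."
static_assert(Number<double>);
static_assert(Integer<int>);
static_assert(!Integer<double>);

void describe(Number auto)  { std::puts("some number"); }
void describe(Integer auto) { std::puts("an integer"); }   // chosen when both match

int main() {
    describe(3.14);   // only Number matches            -> "some number"
    describe(42);     // both match, Integer is stricter -> "an integer"
}
```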

Steve Wozniak Was My Computer Teacher in 1995

Sara’s dad happens to be Steve Wozniak, co-founder of Apple. Wozniak was teaching my 5th grade class back in 1995, almost a decade after the brain behind the Apple 1 had left the company to start other ventures, including CL9, which brought the first programmable remote control to the commercial market. Steve had a sincere demeanor about the class, but he made sure to keep things interesting. Steve Wozniak leads a conga line of 7th and 8th graders carrying Apple Macintosh PowerBook laptops he purchased for them, in an image dated 1993. Steve Wozniak instructs children in his after-school computer class in the 1990s. My most vivid memory from the class was a conversation I had with Steve. At the end of the year, we were all given a hard copy of Steve’s biography, Steve Wozniak, Inventor of the Apple Computer, by Martha E. Kendall.

printbf – Brainfuck interpreter in printf

Generic POSIX printf itself can be Turing complete, as shown in Control-Flow Bending. Here we take printf-oriented programming one step further and present a brainfuck interpreter inside a single printf statement. An attacker can control a printf statement through a format string vulnerability, or if the attacker can control the first argument to a printf statement through, e.g., a generic memory corruption. Have a look at the C sources to see what is needed to set up the interpreter, and also look at the tokenizer in toker. Keep in mind that this printbf interpreter is supposed to be a fun example of Turing completeness that is available in current programs and not a new generic attack vector. To use printbf in the wild, an attacker will either have to disable FORTIFY_SOURCE checking or get around the checks by lining up the format strings and placing them in read-only memory. The attacker model for printbf assumes that the attacker can use memory corruption vulnerabilities to set up the attack or that the sources are compiled without FORTIFY_SOURCE defenses.
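
The primitive everything here rests on is that printf can write memory, not just read it: the %n conversion stores the number of characters printed so far through a pointer argument. A benign illustration (not the interpreter itself); note that glibc's FORTIFY_SOURCE rejects %n in format strings that live in writable memory, which is the mitigation mentioned above.

```cpp
// %n makes printf write to memory: it stores the count of characters printed
// so far into the pointed-to int. This is the building block format-string
// attacks (and printbf) abuse. Benign demo only.
#include <cstdio>

int main() {
    int written = 0;
    std::printf("brainfuck%n in printf\n", &written);             // %n prints nothing
    std::printf("characters printed before %%n: %d\n", written);  // 9
}
```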

Git query language

In a repository path, run gitql "your query" or git ql "your query". The commits table exposes the fields author, author email, committer, committer email, hash, date, message, and full message. Example queries: select hash, author, message from commits limit 3; select hash, message from commits where 'hell' in full message or 'Fuck' in full message; select hash, message, author email from commits where author = 'cloudson'; select date, message from commits where date < '2014-04-10'; select message from commits where 'hell' in message order by date asc. Questions? Gitql doesn't want to kill git log 😅 - it was created just for science! It's read-only: nothing about deleting, inserting or updating commits. The default limit is 10 rows. It's inspired by textql. But why is gitql a compiler/interpreter instead of just reading a SQLite database with all commits, tags, etc.? Answer: because we would need to sync the tables every time before running SQL, and we would have SQLite databases for each repository.

Dependency hell is NP-complete

We’ll abbreviate package P version V as P:V. A dependency on P:V must be satisfied by version V exactly, not V-1 and not V+1. Given a 3-SAT formula, we can create a package F representing the whole formula, packages C1, C2, …, Cn representing each clause, and packages X1, X2, …, Xm representing each variable. If the formula is satisfiable, the satisfying assignment gives one way the package manager could successfully install F. Therefore, we’ve converted the 3-SAT instance into a corresponding VERSION instance with the same answer, which establishes that 3-SAT can be solved using VERSION, so VERSION is NP-hard. Implementations: the assumptions above are quite minimal: packages have a list of dependencies, a package’s dependencies can change from version to version, a package’s dependencies can be restricted to specific versions of those dependencies, and it is possible for two versions of a package to conflict with each other. Some package managers might not allow a dependency to list a specific version, instead requiring a range, but we can easily change the version requirements 0 and 1 to ≤ 0 and ≥ 1. If package version selection is NP-complete, that means the search space of possible package combinations is too large and intricate for efficient systematic analysis; what about efficient systematic testing? If a search finds a conflict-free combination, why should we believe the combination will work? The absence of a version conflict may indicate only that the combination is untested. One way to avoid NP-completeness is to attack assumption 1: what if, instead of allowing a dependency to list specific package versions, a dependency can only specify a minimum version? Then there is a trivial algorithm for finding the packages to use: start with the newest version of what you want to install, and then get the newest version of all its dependencies, recursively. As the examples already hint at, if packages follow semantic versioning, a package manager might automatically use the newest version of a dependency within a major version but then treat different major versions as different packages.
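
Returning to the reduction, one concrete way to realize it under the stated assumptions (an illustration, not necessarily the article's exact encoding) is this. Give every variable package Xi two versions, Xi:0 and Xi:1, with no dependencies; choosing a version is choosing a truth value, and the one-installed-version-per-package rule forces consistency across clauses. A clause such as (x1 ∨ ¬x2 ∨ x3) becomes a package C1 with three versions: C1:1 depends on X1:1, C1:2 depends on X2:0, and C1:3 depends on X3:1, so any installable version of C1 certifies that one literal of the clause is true. Finally, F:1 depends on each clause package Cj, any of its versions being acceptable, so a successful install of F:1 exists exactly when a single consistent assignment satisfies every clause.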
