Wayne Skipper // August 20, 2019

Open Badges Backpack 2.0

Mozilla-Badgr2.jpeg image

We’re pleased to announce that the final transition of the Mozilla Open Badges Backpack to Badgr will be complete on August 22, 2019. Dozens of digital badging systems that were formerly connected to the Mozilla Backpack will now be directly connected to Badgr. Existing Backpack integrations should continue to work seamlessly and will now support v2.0 badges. We’ve completed testing in Blackboard, Brightspace, Moodle, and numerous other platforms.

To understand what this important transition means for the future of digital credentials, let’s take a moment to understand how we got here.

A Well-Documented History

In 2011, the Mozilla and MacArthur Foundations announced the creation of Open Badges, a “project aimed to spark a transformation of how we recognize learning.” By the end of 2011, Mozilla and MacArthur had engaged over 300 nonprofit organizations, government agencies and others in a worldwide collaboration to demonstrate “the possibilities of an open credentialing ecosystem.” The first technical implementations were made available that same year, under an open license.

By 2013, numerous implementations of Open Badges had been created, including Canvabadges (2011), Moodle (2013), and Blackboard (2013). By the end of 2013, over 1,450 organizations were issuing Open Badges. In 2014, Pearson released a video describing their use of Open Badges in their Acclaim platform.

In 2015, MacArthur tapped Concentric Sky to take stewardship of the Open Badges community and develop the 2.0 version of Open Badges. We did our work in the open, working with an array of early community members. We recorded our working group calls, took copious public notes, and made it all available as part of the Internet Archive. By late 2015, we’d developed a draft version of Open Badges 2.0 and released Badgr as an open source reference implementation of the specification.

At the beginning of 2017, Mozilla, MacArthur, and Concentric Sky passed stewardship of the Open Badges standard to IMS Global, where it remains today.

The Value of Open Infrastructure

Open Badges were originally designed to serve as a common language for describing skills and learning achievements. More than just a pretty picture, Open Badges contain metadata that allow digital badges to serve as independently verifiable digital credentials. Even the code used to verify digital badges is open source. This ability for independent verification of embedded metadata is what makes digital badges suitable to express a wide array of traditional credentials online in a common, machine-readable format.
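To make the verification idea concrete, here is a minimal sketch (in Python) of the structural checks a verifier performs on a hosted Open Badges 2.0 assertion. The field names follow the 2.0 specification, but the URLs and data are hypothetical, and a real verifier does much more: it re-fetches the assertion and badge class from their id URLs and checks revocation and expiry.

```python
# Minimal structural check for a "hosted" Open Badges 2.0 assertion.
# A real verifier also re-fetches the JSON from assertion["id"], confirms
# the hosted copy matches, and checks revocation and expiry.

REQUIRED_FIELDS = {"id", "type", "recipient", "badge", "verification", "issuedOn"}

def check_assertion(assertion):
    """Return a list of structural problems; an empty list means the shape is valid."""
    problems = sorted(f"missing field: {f}" for f in REQUIRED_FIELDS - assertion.keys())
    if assertion.get("type") != "Assertion":
        problems.append("type must be 'Assertion'")
    if assertion.get("verification", {}).get("type") not in ("hosted", "HostedBadge"):
        problems.append("not a hosted assertion")
    elif not str(assertion.get("id", "")).startswith("https://"):
        problems.append("hosted assertions must live at an HTTPS URL")
    return problems

example = {
    "id": "https://example.org/assertions/123",  # hypothetical issuer URL
    "type": "Assertion",
    "recipient": {"type": "email", "hashed": False, "identity": "learner@example.org"},
    "badge": "https://example.org/badges/intro-web-dev",
    "verification": {"type": "hosted"},
    "issuedOn": "2019-08-01T00:00:00Z",
}
print(check_assertion(example))  # -> []
```

A signed assertion would instead be verified cryptographically against the issuer's public key, but the structural checks are similar.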

Open Badges Backpack 2.0 image

As part of our Open Badges 2.0 work with MacArthur, Concentric Sky developed another technology standard designed to facilitate stackable credential connections between organizations. This standard is called Open Pathways, and it serves as a key technical foundation for initiatives including IMS Global’s Comprehensive Learner Record and Arizona State University’s Trusted Learner Record.

Where Open Badges allow for the creation of portable, independently verifiable digital credentials, Open Pathways allow for the creation of portable, independently verifiable digital transcripts - machine-readable transcripts that can be aligned to competency frameworks and skill taxonomies, such as those created by our work with the US Chamber of Commerce on the Job Data Exchange.

The ability to understand and describe the relationships between credentials is essential to designing a Future of Work that is open, scalable, and beneficial to all stakeholders. Rather than relying on the products of vendors that might go out of business or be acquired, our work aims to create open technology infrastructure that will stand the test of time and serve credential recipients over their entire lifetime of learning.

Open Badges Backpack 2.0 image

The Backpack can be seen in this context as a learner-controlled, cross-organizational credentials wallet where badge recipients can display badges from multiple platforms side-by-side, explore new Pathways made available to them by the credentials they hold, and see how their credentials map to job descriptions and admissions requirements for the next steps in their learning journey.

Looking Forward

As the worldwide digital credentials ecosystem continues to grow, we’re keeping pace. Our current work includes collaborations with new initiatives that seek to expand the usefulness of Open Badges and integration work to bring alignment between Open Badges and the Verifiable Credentials work happening at W3C.

One of our most exciting initiatives is our work on Badge Connect - a standardized, easy-to-use way for badge recipients to move their digital credentials between major badging platforms.

And we’re just getting started. Watch this space for upcoming innovations around digital credentials, identity, blockchains, and open skills taxonomies. If you have questions about the specifics of the Backpack migration, please visit the Badgr Knowledge Base or contact us.

Wayne Skipper // January 30, 2019

Is Blockchain Ready for Prime Time in Education?

blockchainready.jpg image

This post originally appeared in EDUCAUSE Review in January 2019.

With the proliferation of blockchain solutions in the education marketplace, it is incumbent upon decision makers to understand what they are buying. Yet, far too often blockchain solutions are used as a way for institutions and vendors to “check the box,” with no exploration of the long-term ramifications of using the technology.

Original Mystery

The origins of blockchain are shrouded in mystery. On October 31, 2008, a paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System” was published by an unknown party using the pseudonym Satoshi Nakamoto, and on January 3, 2009, the Bitcoin blockchain came into existence with the mining of its first block, known as the genesis block. Since that time, there has been much speculation around the inventor’s identity, and several individuals have attempted to claim the inventorship mantle. None so far has been able to provide convincing proof. To this day, the actual creator or creators of blockchain remain anonymous.

Decentralizing Trust

Regardless of the curious nature of blockchain’s origins, it seems clear that the technology has the potential to change the world, if only because distributed ledgers are well suited to solving problems of trust. As the technology gains steam, and as the boundaries between regulation and technology are solidified, we may start to see entire industries of middlemen become obsolete.

Blockchain-based decentralization allows us to design systems without a single point of failure, systems that eliminate data bottlenecks, and, perhaps more importantly, systems that survive the rise and fall of individual technology companies. This makes the technology ideal to disrupt a wide range of existing business models, at least in theory.
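As a toy illustration of the integrity property under discussion, here is a minimal hash-chained ledger in Python. Each block commits to the hash of the block before it, so altering any historical record invalidates everything after it. This sketch deliberately omits mining, consensus, and networking, which are exactly the parts the rest of this article worries about.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (deterministic via sorted keys)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev=None):
    """Create a block that commits to the hash of its predecessor."""
    return {"data": data, "prev_hash": block_hash(prev) if prev else "0" * 64}

def chain_is_valid(chain):
    """Each block must commit to the hash of the block before it."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = [make_block("genesis")]
chain.append(make_block("alice pays bob 5", chain[-1]))
chain.append(make_block("bob pays carol 2", chain[-1]))

print(chain_is_valid(chain))             # -> True
chain[1]["data"] = "alice pays bob 500"  # tamper with history
print(chain_is_valid(chain))             # -> False
```

In a real network, rewriting history also requires out-mining the honest majority, which is where the economic incentives discussed below come in.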

Temporary Integrity

Yet, for all its promise, blockchain technology is not without its drawbacks. It’s slow—perhaps too slow to support massive adoption. If a blockchain network becomes clogged, it can take days to validate transactions. Despite the claims of some supporters, it isn’t impenetrable—a recent successful “51% attack” on a major public blockchain allowed attackers to reverse transactions and double-spend coins. Because of this structural vulnerability, currency-based blockchains such as Bitcoin and Ethereum maintain their integrity only so long as it is profitable to mine their coins. How long will that be?

Perhaps the greatest risk comes from yet another hot new technology, quantum computing, which could render the cryptographic foundations of blockchain obsolete in as little as ten years. Since the initial release of Bitcoin, several new models of blockchain design have been developed that aim to address these limitations, each with varying degrees of success. However, the question remains: Are these the only vulnerabilities in major public blockchain models? The fact is, we simply don’t know.

Credentials, Not Currency

Despite the significant uncertainties surrounding the use of public blockchains, numerous vendors have begun to write education records such as credentials to the Bitcoin and Ethereum blockchains. Market forces, structural vulnerabilities, and new technologies, however, make it clear that no vendor can guarantee the long-term integrity of such records. By design, the integrity of the records is outside of any vendor’s control. Therefore, vendor claims that public, currency-based blockchains are suitable for the long-term validation of “high stakes” credentials are highly questionable, if not fundamentally flawed.

This is not to say that using blockchains to store educational records is in itself a poor use of the technology. Instead, what is needed is an open technology ecosystem that combines public blockchains, private blockchains, and off-chain storage, combining the strengths of each technology to create a decentralized storage mechanism whose verification incentives are not tied to currency markets. This approach offers all the benefits of blockchain-powered record verification without the worry that external economic factors or new technologies might render education records corruptible—and without the need to trust in the continued existence of any single technology company.

In early 2018, Concentric Sky and partners BrightHive and the DXtera Institute proposed such a blockchain ecosystem, called EdRec. EdRec is a learner-centric, open standards approach to learning record storage “on the blockchain,” with self-sovereignty of learner data as its key design principle. The project’s goal is to create a privacy-focused open technology standard that any company can implement in their products.

The proposal was a winner of the US Department of Education’s Reimagining the Higher Education Ecosystem Challenge, and since then, the project has begun to attract numerous institutions and large employers that see the value of a vendor-independent, machine-readable lifelong learning profile based on open technology standards.

Look Before You Leap

When considering the application of blockchain technology in education, it’s important to keep in mind your organization’s long-term objectives. Blockchain technology is a hot topic, but you’re not going to miss the boat just because you don’t jump in first. It will take time for these technologies to reach the maturity, scale, and reliability needed for enterprise deployment in education. If you do decide to jump in now, make sure you understand the risks.

Wayne Skipper // December 4, 2018

LRNG + Badgr: Unlocking Opportunity Through a Shared Digital Backpack

pexels-photo-981781.jpeg image

This post by Connie Yowell and Wayne Skipper originally appeared on Medium in December 2018.

We’ve known for quite a while that it’s time to reinvent and reimagine the way learning is recognized and credentialed. With the evolution of learning comes a sense of urgency to start recognizing learning that happens outside of the classroom. This need, coupled with an airplane conversation, created the idea of Open Badges, a digitally verifiable record of achievement. Open Badges create a common language for learning that allows us to describe these learning experiences in the same way we describe academic achievements and industry certifications. While we both believe you can never replace educators or physical school buildings, we also know from experience that badges can unlock youth potential by creating connected learning experiences that link young people directly to opportunity.

The history of badges, in large part, started with funding from the MacArthur Foundation. In particular, the 2013 Digital Media and Learning competition catalyzed dozens of projects that put badges into use in real learning environments. From that foundation, MacArthur funded Mozilla to build the Open Badges standard and a technical infrastructure that gives both traditional and non-traditional educators the ability to reward learners for a diverse set of skills like coding, editing, and remixing. The primary feature of that infrastructure was a central service called the Mozilla Backpack, designed to help learners aggregate badges from multiple sources.

The emergence of digital badges revolutionized the way the world thinks about learning achievements. This led to rapid growth and widespread adoption as organizations sought to use the common language created by badges to break learning experiences out of traditional silos and put them directly into the hands of learners. This growth led to the need to improve the world’s badging infrastructure, and to that end the MacArthur Foundation tapped Concentric Sky. Concentric Sky was the lead author of the Open Badges 2.0 specification on behalf of MacArthur, and as part of our work developing the standard, we created Badgr.

Since its launch in 2015, Badgr has grown from a simple open-source project into one of the world’s leading credentialing platforms trusted by over 10,000 partner organizations around the world. Badgr was selected as the native badging system for Canvas, and serves up fully verifiable digital credentials for millions of users around the world. On August 15, 2018, Mozilla announced that it would retire the Mozilla Backpack and pass that role to Badgr.

Recently, Badgr and LRNG joined forces to unlock even greater opportunity for LRNG’s more than 50,000 users and allow for greater access to LRNG’s high-quality content. This, along with LRNG’s official Open Badge Compliant certification from IMS Global, means great changes ahead.

A New Kind of Backpack

This partnership will allow LRNG users to curate badges from various sources, then share them out selectively. This setup fulfills the initial vision of the MacArthur funding, creating a true foundation for lifelong learning: badges become portable and accessible from anywhere, and learners can update or move them at their discretion.

When badges are earned on the LRNG platform, they’ll now be distributed through Badgr and will live in the Badgr ecosystem. This means users can now stack badges with those earned through outlets like Digital Promise, Canvas and thousands of academic institutions and organizations. The integration will create the ability for users to access their entire Badgr Backpack while on their LRNG profile — the same Backpack view users see in Canvas and other applications.

This partnership not only affects youth currently on the platform but also opens up the content readily available on LRNG.org to other young adults. For example, an instructor designing a Canvas course could build off of LRNG badges, or create a competency assessment that pulls through to an LRNG badge in order to support a playlist.

What’s Next?

LRNG is gearing up for Summer 2019 and thinking through which functionalities and APIs can support greater opportunities for youth and further connect in- and out-of-school learning. Through our merger with SNHU, we’ll be building true pathways to competencies that can count for college credit or career skill attainment.

Already, we’re working with Classcraft on an API that has allowed their educators and classrooms to access LRNG’s playlists. Students can now complete our playlists and then unlock points and prizes within the Classcraft game, creating more opportunity for engaged classrooms and ensuring that learning counts — no matter where a student is.

LRNG has also created an API to integrate into the One Summer Chicago application process so that badges can be applied to level up a youth’s internship opportunity for the summer. Youth get a more advanced internship based on the number of badges they completed in the year prior. And, towards the end of January 2019, we’ll be integrating into the Crisis in Space game with content and playlists centered around critical thinking.

The teams at LRNG and Badgr are excited to bring our users and members the new opportunities available thanks to this integration. Stay tuned and follow us on Twitter at @WeAreLRNG and @BadgrTeam.

Jeremy AAsum // October 3, 2018

Modular Design

An illustration of a designer and developer both speaking about modular design

A project at Concentric Sky applies the expertise of user experience design, graphic design, web development, software engineering, quality assurance testing, and systems engineering. Our teams have diverse backgrounds and a breadth of expertise. However, these various disciplines mean that we have different approaches to understanding and solving problems.

Modular Design is how Concentric Sky empowers each discipline to use the same tools for designing and building interfaces and to work towards project goals in tandem. This results in a more agile approach with less of the drudgery of waterfall – all while maintaining an enjoyable user experience and delightful presentation. It sets the team’s sights on understanding the problem and testing solutions early and often. This is accomplished by eliminating unnecessary documentation and pedantic review cycles.

To understand how this is done we must first review the role of design and the reality of software development. Here we’ll find some ingrained assumptions and demonstrate how Modular Design challenges these while offering a clearer path forward.

Modular Design image

Design Systems

Atomic Design, BEM, and a variety of in-house solutions are a sign that traditional approaches to software design have shifted towards design systems rather than design mockups. There are apparent benefits:

  • Consistency: the user gets a cohesive experience, especially across an ecosystem of products. Interfaces become more intuitive while enforcing brand standards.

  • Manageability: a design system is a bottom-up approach that allows us to think of our interfaces as a collection of parts. Changing those parts will predictably impact the interface when it comes to rebrands, white labeling, and enhancements.

However, executing a successful design system is difficult and requires us to challenge a lot of ingrained assumptions.

Modular Design image

At Concentric Sky, Lean UX has accentuated our need for design systems. To be successful, we need to test our assumptions early and often, and respond to new user data quickly. From a technical perspective, styleguides such as SMACSS (similar to BEM) and concepts from Web Components have informed how we construct our interfaces. Our goal is to unify these approaches and meet the requirements of designers and developers alike.

An iterative design process, which allows us to test and tweak often, is unique to software. Software tends to be large and complex, yet its strength is in its flexibility. On the other hand, disciplines like architecture and print design remain rigid and require certainty before execution. This understandably leads to longer design cycles.

Modular Design image

Design systems fail when the designer and the developer are not unified in their approach. Not using the same design system is akin to having two team members who are fluent in different languages: great for them and anyone who speaks the same language, but requires heavy translation when speaking to each other. We wanted to remove the need for translation and empower designers and developers to use a singular design system that persists from prototyping through launch – and beyond.

A successful design system is:

  • Iterative: global changes should be trivial as requirements change and our assumptions are tested.
  • Extensible: we need building blocks to start from (instead of a blank page) to solve new problems. These building blocks should be flexible and able to adapt to nuance.
  • Ubiquitous: “Call to Action Card” means the exact same thing to designers and developers.

From Print to Software

Mockups are necessary to define the final product; however, we shouldn’t confuse the means with the end. A mockup traditionally serves these roles:

  • For designers: the canvas where their ideas come to life.
  • For stakeholders: ensure that qualitative and quantitative goals will be met.
  • For technicians: detailed specification on how to reproduce the design in its final medium.

Modular Design image

A print designer’s mockup represents physical plates and there is virtually a one-to-one relationship between the mockup and the result (what you see is what you get). Software, on the other hand, has a more subtle relationship between the mockup and its final medium. Though it’s changing, most design software still outputs specification for technicians based on the concept of the printed page.

Modular Design image

In software a mockup represents a view, and views are constructed from smaller pieces we call modules. For a developer there’s no equivalent to a plate. Modules are defined and then assembled together to create a view.

Modular Design image

Modules are necessary because most developers follow the DRY principle (don’t repeat yourself). Instead of specifying how an element should appear multiple times within a product (as would be the case with each plate in print), they define it once, and then point to that definition whenever displaying that element. The DRY principle allows for rapid iteration because we primarily deal with declaring and updating definitions rather than views. Modular Design embraces the DRY principle and empowers designers to specify how their design will be produced in the final medium: code.
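The DRY workflow just described can be sketched abstractly in a few lines of Python. The module names and properties here are hypothetical stand-ins for real CSS rules or components; the point is that views only reference definitions, so one change propagates everywhere:

```python
# Module definitions live in one place (think: a stylesheet or component library).
modules = {
    "button": {"padding": "12px", "background": "blue"},
    "card":   {"border": "1px solid gray"},
}

# Views never restate the rules; they only reference modules by name.
views = {
    "home":    ["card", "button"],
    "profile": ["card", "card", "button"],
}

def render(view_name):
    """Resolve a view into the current module definitions."""
    return [modules[name] for name in views[view_name]]

# One change to a definition updates every view that uses the module.
modules["button"]["background"] = "green"
print(render("home")[1]["background"])     # -> green
print(render("profile")[2]["background"])  # -> green
```

Contrast this with the print model, where the same change would have to be repeated on every "plate" (every view) by hand.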

Modular Design image

Enter Modular Design

Modular Design allows designers and developers to have objective discussions about the impact of design decisions. Modular Design is a language for defining and discussing the modules we use to create our views. Modules aim to solve common user experience problems in a generalized way. Together with content, modules create a narrative that addresses context-specific problems (the view).

Modular Design image

Nesting and Recursivity

Modules can be nested and recursive, which allows us to define every aspect of the interface as a module. As requirements change and assumptions are challenged, modules adapt holistically but independently from each other.

If a change is identified in code by the developer, the designer can locate the same module in their design software and mirror the updates. This works because Modular Design is an approach that applies in any context: it is not specific to design software, CSS, or a JavaScript framework.

The goal is to properly encapsulate every element of the interface. Encapsulation occurs based on the problems the module solves, rather than on appearance or branding concerns alone. New modules are designed when gaps in our problem solving are revealed through testing or changing requirements.

We’ll draw lines around an interface to determine where meaningful encapsulation of modules exists. Once a module is encapsulated, it cannot affect modules outside of it or other modules within it.

Counterintuitively, agreeing to more constraints makes our interfaces easier to manage and extend. We can make design decisions knowing what the impacts will be in code. This saves time and money by eliminating the need to constantly rethink portions of a product, and enhancements are easily translated through the whole interface.

It’s best to organize and name modules based on the problem they solve. “Header” is a better name for a module than “Portfolio Title” because “Header” implies it can be used in more than one context. Along with the name, it should be clear what the properties of the module are, so that our design system is always up-to-date and evolving with the product.

A Module’s Properties

A module is made up of three properties: rules, states, and submodules. Rules describe the module – its appearance. States and submodules override the default rules.

Modular Design image

Rules – The “Button”

Rules describe what makes the module unique and repeatable within the interface.

It’s okay to break the rules

When the only difference between two modules is a few rule changes, they’re likely the same module. Those changes should be documented as a submodule or state.

Modular Design image

States – The “Button is Disabled” state

States indicate a module’s behavior (what it can do). A module changes its state after being loaded into the view. Since modules are mobile-first, any responsive changes are considered a state. States can also chain together – such as a hover state on a disabled state.

States always indicate modules

In order for an element in our interface to receive a state it must be a module (and thus be encapsulated and documented). Identifying states is the most useful tool for identifying modules.

Modular Design image

Submodules – The “Button Secondary” submodule

Submodules change the rules of a module, along with its states. This allows us to extend our interface without the need for heavy lifting. Submodules tweak and remix modules in a deliberate and organized way. When we encounter nuance, such as the need for more or less emphasis, new messaging, or adapting to content changes, submodules are the answer. They allow us to keep our modules focused on the problems they solve while optimizing them for new contexts.

Submodules are like copies or re-skins of a module that selectively override certain rules. Submodules can also be chained together – such as Button Secondary along with “Button Uppercase”.

Modular Design works only when an entire team buys into it. This means something different for each discipline, but the properties above are universally accepted and useful. Our team’s shared understanding is what allows for a cohesive design system for the whole process – from prototype to launch.
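In code, the three properties above often surface as a disciplined class-naming convention. This sketch uses a BEM-flavored scheme (double-dash modifiers for submodules, is- prefixes for states) purely as an illustration; the exact convention is a team choice, not something prescribed by Modular Design itself:

```python
def module_classes(module, submodules=(), states=()):
    """Compose the CSS class list for one module instance.

    Rules live on the base class; submodules override rules via modifier
    classes; states are toggled at runtime via is- classes.
    """
    classes = [module]
    classes += [f"{module}--{sub}" for sub in submodules]  # submodules change rules
    classes += [f"is-{state}" for state in states]         # states reflect behavior
    return " ".join(classes)

# "Button" with the "Button Secondary" submodule and "Button is Disabled" state:
print(module_classes("button", submodules=["secondary"], states=["disabled"]))
# -> button button--secondary is-disabled

# Chaining submodules, as with Button Secondary plus Button Uppercase:
print(module_classes("button", submodules=["secondary", "uppercase"]))
# -> button button--secondary button--uppercase
```

Because the same names appear in the design files and in the markup, "Button Secondary" means exactly the same thing to designers and developers.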

We’ll continue to explore various aspects of applying modular design in future articles such as how to organize design files into modules, develop modular code, and facilitate a process of rapid iteration.

Modular Design image

Thanks to Adam Barton, Nic Marson, and Jesse Holk who helped with the content and illustrations for this article.

Wayne Skipper // September 26, 2018

Can Education Keep Up with Technology?

Keeping up

This post originally appeared in EDUCAUSE Review in September 2018.

Trends such as automation and artificial intelligence (AI) are not just concerns for the future. With machines expected to handle over 50% of workplace tasks by 2025, it is imperative that we begin to take seriously the rapidly accelerating pace of technological change. Even jobs that have long been considered to be the exclusive purview of elite graduates—doctor, lawyer, hedge fund manager, even CEO—are now being impacted.

A recent report from Burning Glass and the Strada Education Network finds that 43 percent of recent college graduates are underemployed in their first job out of college. With job-hopping becoming the primary avenue by which workers can obtain a significant pay raise, it should be no surprise that data from the Bureau of Labor Statistics highlight another accelerating trend: a decline in the average length of employment.

Despite record-low unemployment, employers report increased difficulty in hiring skilled workers. Call it a skills gap or not, it seems clear that some sort of gap exists in our ability to match workers to jobs at the pace needed by employers. This is another trend that seems likely to accelerate as employers seek to increase already record high levels of worker productivity.

So how can we prepare students for a world in which the skills they need to succeed in the workforce change so rapidly while best matching them to the jobs available to them today? The solution lies in the quantification of learning itself.

Machine-Readable Learning

Historically, student learning achievements have been represented in monolithic, analog formats such as degrees, which derive their value largely from the reputation of the issuing institution. However, a degree communicates to an employer neither the specific skills a student has learned nor the level of mastery the student has demonstrated. Instead, it primarily communicates the amount of time the student has spent earning the credential.

Competency-based learning is an alternative to time-based units of measure; it offers a much more granular understanding of student learning outcomes and facilitates the design of programs that can be personalized to suit different levels of learning ability. Individual competencies are often arranged into systems called competency frameworks, such as the Common Core or the NACE Career Readiness Competencies.

The Competencies & Academic Standards Exchange (CASE) standard from IMS Global gives us the ability to assign a web-based “permanent address” to each competency in a framework, moving us beyond simple keyword matching when discussing skills and into the realm of machine-readable taxonomies.
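A small sketch makes the difference concrete. Every URI below is hypothetical, but the mechanism is the one CASE enables: a competency's permanent address is an exact, machine-comparable key, so a badge and a job posting that reference the same URI match without any fuzzy keyword logic:

```python
# Each competency gets a stable, web-resolvable identifier (a "permanent
# address"), as in the CASE standard. The URIs below are hypothetical.
FRAMEWORK = {
    "https://example.org/case/items/critical-thinking": "Critical Thinking",
    "https://example.org/case/items/teamwork": "Teamwork / Collaboration",
}

# A badge aligns to competencies by URI, not by free-text keyword.
badge_alignments = {
    "Capstone Project Badge": ["https://example.org/case/items/teamwork"],
}

# A job posting can reference the same URIs, so matching is exact.
job_requirements = [
    "https://example.org/case/items/teamwork",
    "https://example.org/case/items/critical-thinking",
]

def matched_competencies(badge):
    """Competency names a badge demonstrably covers for this job."""
    covered = set(badge_alignments.get(badge, [])) & set(job_requirements)
    return sorted(FRAMEWORK[uri] for uri in covered)

print(matched_competencies("Capstone Project Badge"))
# -> ['Teamwork / Collaboration']
```

Keyword matching would have to guess that "teamwork" and "collaboration" are the same skill; shared URIs remove the guesswork entirely.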

Common Building Blocks

Open Badges are designed to serve as a common language for describing learning achievements—digital building blocks that allow us to talk about learning achievements from any source in the same way. Everything from annually expiring CPR certificates to industry certifications and even academic credentials can be represented as digital badges. Badges allow us to create digitally verifiable credentials that represent mastery of skills tied directly to a competency’s permanent address in a competency framework.

With the advent of Open Pathways (now part of the Comprehensive Learner Record), we can link Open Badges from any issuer together into stackable learning pathways. This gives us the ability to create portable, machine-readable transcripts of a student’s learning journey that align student learning outcomes with skills attainment. Two primary benefits of this approach are the ability to meaningfully incorporate prior learning assessment and the simplification of transferring academic credit between institutions.

Machine Teaching

A transcript is a look backward on a learner’s journey. A well-structured digital transcript allows us to use machine learning to look forward. By comparing a student’s progress on a learning pathway to the progress of others, we can adaptively guide the student along an optimal path to success, recommending individualized content along the way.

We see steps in this direction in the trend toward personalized learning, but the open ecosystem formed by the combination of CASE, Open Badges, Open Pathways, and the Comprehensive Learner Record gives us the tools we need to implement this at scale—and in a way that ensures transparency and record portability.

A new initiative called EdRec adds a blockchain-based privacy-protection layer to the ecosystem that ensures student learning outcomes data is treated as the property of the student it describes.

Self-Driving Organizations

In the same way that algorithms can manage tasks ordinarily associated with C-level employees, today we see workers around the world being managed by algorithms as part of the gig economy. As we extrapolate this trend deeper into organizations, we see that a surprising array of tasks can be automated and directed by machines.

Soon, we’ll begin to see entire organizations in which the role of workers is no longer to make decisions but instead to set parameters and provide checks and balances. And this may not be all bad for workers. If we carefully design our systems to eliminate unconscious biases, we might find that letting the machines organize our work is beneficial to everyone, perhaps even enabling us to finally address some of the most challenging problems facing the world today.

Enabling Beneficial Change

Given current trends, it seems inevitable that machine-driven teaching and real-time skills matching will become an indispensable part of the future economy. In fact, it might be the only way to generate continued increases in worker productivity. We can no longer expect employers to interpret analog representations of learning achievements. We must instead focus on machine-readable representations backed by an ecosystem of open technology standards.

We have the technology. Our challenge is to ensure that the systems we put in place to address these trends are designed to benefit students and workers, not take advantage of them. We’ve seen clearly the perils of allowing corporations to productize personal data in private data silos. We need to take thoughtful action now to ensure a better future for students. The best way that institutions can help is to require vendors to implement open technology standards as a requirement of doing business.

Together, we can not only embrace the future of work, we can help shape it.

Wayne Skipper // August 28, 2018

Reimagining the Higher Education Ecosystem

EdRec image

UPDATE: Our proposal was selected as a winner of the Challenge!

This June, the US Department of Education issued a challenge seeking the best ideas worldwide for how to align the US postsecondary educational ecosystem to the future of work and life.

Concentric Sky, alongside our partners BrightHive and the DXtera Institute, responded to the challenge with a vision for a future designed around decentralized, fully portable, vendor-agnostic, self-sovereign student records.

We call it EdRec - and it builds off the work of each of our organizations in the realms of open technology standards and the design of interoperable learning systems.

Our elevator pitch: Your “permanent” educational record has never been truly yours. Wouldn’t you want to control it, control access as you progress from one transition to the next, and optimize it for your desired success? We’re rewriting the rules of the game for personal education data by empowering learners with control of their own permanent education record across institutions, applications, and platforms.

On August 27th, our proposal was selected as a finalist and the DOE opened the challenge for public voting. Voting is open from August 28 to September 27, so today we’re asking you to get involved.

Here’s how you can help

  1. Visit the challenge website.
  2. Click “Invest in this project” in the lower left. You’ll be prompted to create an account.
  3. Invest 500 credits in the EdRec project.

We believe that our vision for self-sovereign education records doesn’t have to wait until 2030 to be realized. With your support, we can demonstrate fully portable, vendor-agnostic, self-sovereign student records in three partnered communities within 18 months. If this is a future you’d like to explore with us, please vote today.

About the EdRec team

BrightHive is an impact-focused data analytics & technology company that provides open source software and services supporting public-private data collaboratives. BrightHive’s data trust model along with its data collection, integration, analysis and governance products help transform the way communities of social services providers share data, measure outcomes, and make decisions, ultimately improving individual efficacy and equality of opportunity.

Concentric Sky is an award-winning software development firm with a 13-year track record of delivering innovative software solutions to complex technology problems. We are core contributors to a number of open standards at standards bodies including the W3C, IEEE, and IMS Global, and the makers of Badgr.

DXtera Institute is a growing international, nonprofit, member-based consortium of higher education professionals collaborating to remove technology barriers so that institution leaders, faculty, staff and students have efficient access to information needed to transform student outcomes.

Cale Bruckner // August 17, 2018

Summer Internship Experience at Concentric Sky

ben-white-197668-crop4.jpg image

Header Image: Photo by Unsplash contributor Ben White

Early in 2018, I was approached by Juan Carlos Garcia, an MIT student studying Computer Science, about possible summer internship opportunities. Juan Carlos sent me a well-written cover letter, a professional resume, and a strong justification for why he’d make a great intern. We don’t have a regularly occurring internship program, but on occasion, and for the right candidate, we’ll open our doors and make the time.

Juan Carlos expressed an interest in the education work we’re involved with and was especially interested in contributing to Badgr, our open platform for issuing digital credentials. We offered Juan Carlos a position on our Badgr Team for the summer and we’re glad we did. Juan Carlos contributed code and ideas to Badgr, and the team appreciated the fresh perspectives he offered. His last day was earlier this week, so we thought we’d post this story, from his perspective, about his summer with the Badgr team.

“My internship with the Badgr team at Concentric Sky this summer gave me the opportunity to explore and learn web development as a software developer on a team with passionate individuals. Having never written a line of web development code prior to the start of the summer, I can now say that I am well versed in the Django and Django REST Python frameworks as well as comfortable with working in TypeScript, HTML, and CSS. It was at first challenging to digest a whole website’s worth of code that was foreign to me, but I’m grateful to take away from this internship the skill and knowledge necessary to deploy a website. Whenever a confusion or block in my workflow did surface, the senior developers on the team were only a message or desk away from helping me out.

My undergraduate studies have been deeply theoretical so I see shipping code to a real industry website as a milestone in my career. It is exciting that I can go to my web browser and see the effects on a website from code that I wrote. This internship also gave me insight into what working in the software industry is like. By participating in team meetings I was able to observe how differing roles on a team come together to form a successful product.

Working as a web developer at Concentric Sky for the summer has equipped me with more tools in my technical bag and has allowed me to engage with real product development in a way that I have not been able to in my years at MIT.” - Juan Carlos Garcia

Thanks for spending some time with us this summer, Juan Carlos, and thanks for your contributions to Badgr. We hope you have a great senior year!

Wayne Skipper // June 4, 2018

The Changing Landscape of Credentialing in Education

gMAY18_kids.jpg image

This post originally appeared in STEM Magazine in May 2018

The way that we assess learning achievements is changing. Our current system of grouping learners into age-banded classes and ranking their performance using broad, letter-grade categories is becoming increasingly outdated. Letter grades have little value to learners, who can still fail to master key learning objectives despite getting good grades. Letter grades also have little or no meaning to employers, who are looking for a more granular understanding of a candidate’s mastery of specific skills. In order to better serve learners, we need to not only help them master necessary concepts, but also to demonstrate that mastery in the context of a shared system of skills and competencies.

With this in mind, forward-looking institutions are beginning to explore Competency-Based Education, based on open standards which allow us to create a shared, universal language to allow virtually any learning achievement to be understood by other organizations. Using these open standards, the achievements of learners will soon be transferable between institutions and understandable in the context of employment and lifelong learning opportunities.

The work on this new digital environment for learning began about seven years ago and is proceeding quite quickly.

A Very Brief History

In early 2011, the white paper “Open Badges for Lifelong Learning” attracted significant attention as part of the Digital Media and Learning Competition. Soon thereafter, Arne Duncan, then U.S. Secretary of Education, launched the Digital Badges for Learning Competition. This competition resulted in an estimated 300+ non-profit organizations, government agencies, and informal learning entities getting involved in digital badging.

The momentum continued. In 2012, The Mozilla Foundation released an open-source Open Badging Infrastructure which served as the seed for numerous projects. The next year, over 100,000 badges were issued as part of the Chicago Summer of Learning. During the Summit to Reconnect Learning in 2014, a non-profit organization called the Badge Alliance was launched to help coordinate the work of the rapidly growing Open Badge ecosystem.

Fast Forward a Few Years

To date, nearly 15 million badges have been issued worldwide. Issuing organizations can be large or small. Even school systems and individual teachers have begun issuing digital badges across a host of platforms. Schools and organizations have found that they can now issue digitally verifiable credentials to their students, staff, and users to represent learning achievements of many kinds. Teacher professional development is a common use case, as well as preparing young people to enter the workforce.

The Open Badges standard allows learners to understand and share their learning achievements from formal, informal, and self-directed settings across multiple platforms using a common language. Open Badges allow almost any learning experience to become valuable in the right situation. Everything that a person learns can be used to create a rich portrait of them as a learner, a portrait that the learner controls.

Benefits of an Open System of Credentialing

Even if teachers are working with younger children, there are still benefits to creating a digital portrait of their learners. Take, for example, a student who plays video games. This example learner has earned in-game achievement badges for flight simulator games and goes to air shows where he or she gets digital badges for participation.

While on the surface these badges might be seen to have little relevance to education, they can in fact be quite meaningful. The average high school student gets less than an hour of face time with a school counselor. What are the chances that our example youth’s interest in aircraft will surface in that meeting, where it might be used to help motivate the learner?

With this data in hand, a counselor could have this type of discussion: “Did you know you could work on airplanes for a living? Did you know you could fly airplanes for a living? Or even design them? Let me show you how.”

So, we can see that this seemingly trivial information can be made useful to a learner during the most impressionable part of that young person’s life. We can now use a learner’s own interests for their benefit, which is something we couldn’t do before. We didn’t have the information or a way to understand different kinds of learning. Badges can help solve one of the most fundamental challenges of education – how do we identify the interests of students and motivate them to learn?

The important thing to understand about badges is that they might be analyzed in contexts that are not immediately apparent to the issuer. Together, a collection of badges paints a rich portrait of a learner. This portrait can help educators understand the learning progress of students in an immediate and granular way. And it can help a student track his or her own learning achievements in school and life - and share those achievements with colleges and potential employers.

Future Benefits

As colleges and universities are redefining their value in a global marketplace, badges will allow them to award more granular credentials that are of value to both students and employers. Currently, students who do not complete their programs have little to show for their work, even if they completed 99% of their requirements. There’s also currently no way to describe informal learning achievements alongside those that arise from more formal environments. Badges not only benefit learners as they enter the workforce but will also enhance the value of higher education institutions that may be struggling to show value in a world of increasing tuition and rising student debt.

In coming years, companies will hire employees globally, using verifiable skill sets to identify their remote workforces. Learners will take advantage of new learning pathways, helping them aggregate skills from many learning sources and allowing them to build a complete profile of their skills and achievements. This is an exciting time for education. We’re seeing a transformation from a successful but antiquated letter-grade system to a modern credentialing system that better serves learners and their future needs. Learners, schools, and businesses will all enjoy the benefits of an open system that more accurately describes learning achievements and matches them to the skills needed in the workforce.

Wayne Skipper // April 20, 2018

Introducing Badgr Pathways

pathway-large38.jpg image

UPDATE: We’ve posted a video introduction to the system.

We’re pleased to announce the launch of Badgr Pathways. With Pathways, badges from any Open Badges compliant platform can be stacked together in alignment with competency frameworks. Learners have an easy-to-understand map view of where they are in a curriculum. And just like they can share badges, learners can share their Pathway progress - including the steps that they have not yet completed. This allows a learner to share the directionality of their journey, not simply the credentials they already have.

Pathways features an integration with our BadgeRank service, allowing badges from the major badging platforms of the world to be searched within a single interface and added to a Pathway. Badges can be inspected and evaluated on criteria such as the skills they are tagged with or the badge’s alignment to a framework such as NGSS or the Georgia Standards for K-12 Mathematics.

Learners can see where they are in a learning pathway, as well as what they need to do next. More importantly, learners can see all Pathways upon which their credentials fall - answering one of the most fundamental questions in the digital credential space: Why would a learner want one? Now we have a meaningful answer: To see what new opportunities it unlocks for them.

Because Badgr Pathways is based on our proposed new Open Pathways standard, Pathways can be stacked across organizations allowing the creation of data-driven bridges between the programs offered by education institutions, employers, and organizations that provide alternative credentials. Open Pathways allows us to reimagine the learning pathway as an open, portable data object - just like badges themselves.

Here’s an example Pathway in action (notes below):


Things to note in this example:

  • We’re stacking badges from 10 different Issuing organizations across 3 badging platforms (Badgr, Credly, Acclaim).
  • The core curriculum portion of the Pathway is aligned to the California CTE Model Curriculum Standards for Biotechnology.
  • This pathway demonstrates how you can combine standards-aligned academic curriculum with alternative credentials from external organizations.
  • The Biotech Lab Assistant Certificate can be earned via a badge from a local high school program, illustrating cross-institutional program stacking.

Stacking credentials is no longer just a naming convention - and sharing badges to a social network is no longer the end of their usefulness. With Pathways, we can move beyond discussing static snapshots of learning and begin guiding learners to achieve better outcomes - no matter where their lifelong learning journeys may take them.

If you’d like to give Pathways a test drive, head over to Badgr and set up a free account. Create an Issuer and click Manage Pathways to get started.

We hope you enjoy using Pathways as much as we’ve enjoyed creating it.

Cale Bruckner // April 19, 2018

Experience Oregon Tech, The Elevate Edition

Elevate_Edition.jpg image

Header Image: Photo by Unsplash contributor Justin Luebke

Education is in our DNA at Concentric Sky. You can see it in our work, our company culture, and in how we invest. We live it, breathe it, and give back to it every day. I can honestly say that this company wouldn’t exist if it weren’t for our shared passion for contributing to this space.

Investing in education programs, with our time or money, is one of the ways we give back. We look for programs that are outcomes oriented and often focused on closing the skills gap. Elevate Lane County is a great example of a local program that met these requirements.

Elevate Lane County is helping Lane County high-school students see what life after school could be like through job shadow and internship programs with companies offering high-demand, high-wage jobs in our community. It’s a great way to give kids a taste of what’s ahead and a reason to stay in school.

We’ve had a number of students job shadow our employees, and it’s always a mutually beneficial experience. The students love interacting directly with our engineers, designers, and project managers, and our employees feel good about having had an opportunity to share some of what they’ve learned with an impressionable young person considering career paths. Below, I’ve included a letter a student recently sent us after completing a job shadow. You can see the impression the experience made on this student.

Experience Oregon Tech, The Elevate Edition image

Thursday, May 17th, more than 150 students from all over Lane County will visit our offices as part of the Experience Oregon Tech, Elevate Edition event. We can’t wait and we’re looking forward to sharing more of our experiences, and hopefully a bit of wisdom, with this new generation of learners.

Wayne Skipper // April 16, 2018

Creating an Open Pathway for the Next Generation of Learners

Open Pathways Next Gen image

This post originally appeared in EDUCAUSE Review in March 2018. Header Image: Lightspring / Shutterstock © 2018

I think we can all agree that the job market has changed to such a degree that our traditional methods of preparing learners for the workforce and assessing their career readiness are at a crossroads. Ensuring that these methods appropriately prepare learners for the workforce is as vital to higher education institutions as it is for employers. Questions about the efficacy of existing methods leave students wondering whether the value of an undergraduate education is even worth the (potentially quite high) cost of admission. Employers are searching for specifics — not necessarily specific academic achievements, but specific skills, experiences, and attributes that will increase prospective employees’ likelihood of success.

Completing a four-year program is certainly a notable learning achievement. However, hiring managers’ decisions are increasingly less influenced by a credential some see as having a too-general focus. The shift toward a free and open system of verifiable learning achievements has already begun in earnest, and semantics aside, badges or microcredentials will be the gold standard for learners to create their own pathways to the careers of the future, some of which have yet to be imagined.

Back in 2011 when the Mozilla Foundation created a technical standard called Open Badges, it established a new way to digitally verify learning achievements. The idea was brilliant in its simplicity, and it set the stage for a type of learning assessment that can follow learners from cradle to grave.

Open Badges are based on an open standard and as such can be issued and received by anyone at no cost. Badges can be issued by anyone to anyone, for any reason, and there are multiple places online where you can quickly and easily become an issuer. For higher education, badges represent an alternate mechanism of documenting learning achievements that allows institutions to show value to students and to employers who demand more information than a simple letter grade can provide. To be sure, as a community we have a few details to iron out before we can flip a switch and make wholesale changes to the traditional measures of learning outcomes. But to quote Bob Dylan, the times they are a-changin’.

Technically, an Open Badge is a digital image file with additional, non-image data added to the file at the time of creation. The descriptive metadata includes the badge name, the issuer, the earner, and the criteria met to earn the badge. It may also include a URL associated with the issuer, a link to evidence of learning achievement, and the level of alignment to educational frameworks.
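To make the “baking” idea concrete, here is a minimal Python sketch of how such metadata can be embedded in a PNG badge image. Per the Open Badges baking convention, the data lives in an iTXt chunk keyed “openbadges”; all field values and URLs below are hypothetical, and this is an illustration of the chunk layout rather than a full implementation of the baking specification.

```python
import json
import struct
import zlib

def make_chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk is: 4-byte big-endian length, 4-byte type,
    # the data, then a CRC-32 computed over type + data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

# Hypothetical metadata of the kind described above.
metadata = {
    "name": "Lab Safety",
    "issuer": "https://example.org/issuer",
    "recipient": "learner@example.org",
    "criteria": "https://example.org/badges/lab-safety/criteria",
    "evidence": "https://example.org/evidence/42",
}

# iTXt layout: keyword, null, compression flag, compression method,
# empty language tag (null), empty translated keyword (null), text.
itxt_data = (b"openbadges" + b"\x00"   # keyword + separator
             + b"\x00\x00"             # uncompressed
             + b"\x00\x00"             # empty language tag / translated keyword
             + json.dumps(metadata).encode("utf-8"))
chunk = make_chunk(b"iTXt", itxt_data)

# In a real badge, this chunk is spliced into the PNG's chunk
# stream between the IHDR and IEND chunks.
print(len(chunk), "bytes of baked metadata")
```

Because the metadata rides along inside the image file itself, the badge stays verifiable wherever the image travels.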

Open Badges can be issued in a variety of simple ways. Typically, you enter an identifier (usually a learner’s email address) into the issuing platform and the badge is sent to that identifier. Badges can also be issued in the Badgr system by scanning a QR code with a cell phone or retroactively through analysis of learner performance data captured via xAPI. With the robust new Open Pathways standard, learners can navigate learning pathways and stack badges to earn master badges, indicating progressive achievement. Perhaps more importantly, Open Pathways allows learners to discover new learning opportunities based on the credentials they already hold.

Learning takes place on many levels and in many environments. Open Badges can be used to signify a variety of learning events, such as viewing a museum science exhibit, mastering a middle school core competency, earning a high school diploma, or earning an advanced degree or technical certification. A traditional diploma indicates only graduation. Open Badges have the capacity to provide a much more granular picture of a learner’s achievements and skills from early childhood through adult learning.

For employers, the utility and quality of a badge are easily determined by verifying the issuer and the criteria required to earn it. Concentric Sky has also created the BadgeRank service, which is made available as part of Badgr. As with a Google search, anyone can search badges across most of the world’s digital badge platforms, see those results ranked by relevance, and select the resulting badges for inclusion on a learning pathway. Employers, who increasingly search for verifiable skill sets instead of degrees, can review an applicant’s badges to quickly and accurately determine if the individual is the right fit for a position. Applicants can determine exactly which skills employers are looking for to better direct their learning efforts.

These are the sorts of developments that are needed to more effectively connect education outcomes with workforce development, and at Concentric Sky, we believe firmly that this work should be based on open standards and open technology whenever possible. Moreover, we believe that a focus on open standards will enable Badgr and the broader digital badging movement to better meet the needs of the various stakeholders at the rapidly evolving nexus between education and the workforce. As we’ve seen with the growing use of open standards across numerous industries, interoperability is a crucial first step towards creating data-driven practices that can help address questions of efficacy for students, educators, and employers.

Wayne Skipper // April 16, 2018

Why Do Open Technology Standards Matter in Education?

Why-Do-Open-Technology-Standards-Matter-in-Education.jpg image

This post originally appeared on Edarabia in April 2018.

What are open technology standards? A great example of open technology standards is the Internet. The Internet is made possible because a set of common, open standards and a common language called HTML (Hypertext Markup Language) allow devices, apps, and services to work together from anywhere in the world. Without open standards, you probably wouldn’t be reading this or anything else on the web.

In the world of technology, open standards give people from anywhere in the world the ability to interact, share, and collaborate through compatible technology systems.

In education, open technologies refer to the open source software, open standards, and open hardware that education leaders use to plan, develop, and evaluate learning programs.

Let’s start simply and talk about what the word “standard” means in the world of technology. Picture a bicycle: just from the name, you know that something needs to have two tires to be considered a bicycle. One tire makes it a unicycle, and three, a tricycle. The concept of a technical standard is similar to the idea of a document stating bicycles need to have two tires, except it gives rules for technical systems. Put more abstractly, a standard is a document with information and criteria about how a technical system ought to function to be considered an example, or implementation, of that standard.

Technical standards help end users know what to expect from a product. Let’s revisit the bicycle example. We can expect every implementation of the bicycle standard to have some traits in common: two tires, a seat, pedals, a braking system, and handlebars for steering. Similarly, technical standards help end users know the basic features that should be available to them in any given implementation.

We can also expect that if we want to swap out the seat, the part of the frame that holds the seat stock is the right size to accept other seat stocks. If we need to change the wheels, we can expect the frame is built to attach to most bicycle tire rims. In this same way, technical standards ensure a consistent structure in implementations so that other technical systems can predictably interact with them. Being able to interact predictably with other systems is called interoperability.

Although they are very useful, some technical standards are not available to everyone. Closed technical standards are not publicly available and require the purchase of a license if you want to build software that implements the standard. A closed standard allows the owner to charge for their product and control exactly how it is used.

An open standard is available to everyone, and anyone may implement it without purchasing a license. Open standards are continually reviewed and improved upon, so they encourage innovation and cooperation in the tech world. Because they are open, they benefit everyone instead of only those who purchase a license. Open Badges is an open standard for digital credentials that signal learning achievements.

Cale Bruckner // January 9, 2018

Concentric Sky Supports Apprenti Workforce Program

Apprenticeship! image

Talent is, by far, Concentric Sky’s most valuable asset. Like all companies in our sector, our success hinges on our ability to attract and retain that talent. That’s why Concentric Sky became one of the first Oregon companies to support the Apprenti program, the nation’s first registered tech apprenticeship program.

Apprenti is an industry-recognized, state and federally accredited program created in Washington. Apprenti trains future tech workers with an emphasis on underrepresented groups, including women, minorities, and veterans. In Washington, Apprenti successfully placed 100 apprentices into jobs at companies like Amazon and Microsoft. Late last year, Apprenti announced that it would expand into Oregon, Michigan, and California.

Concentric Sky Supports Apprenti Workforce Program image

Cale Bruckner, President of Concentric Sky, with Governor Kate Brown, Eugene Mayor Lucy Vinis, and other community leaders at the Apprenti signing event in December.

When we were first introduced to the Apprenti program, we immediately saw its potential to effect positive social change in our community. By creating pathways for underrepresented groups to gain training, certifications, and placement in the tech industry, Apprenti is making some of the best-paying jobs in Oregon available to Oregonians who have struggled to get the training required to take on an entry-level tech job. Oregon tech employers (including Concentric Sky) have been struggling for years to find the talent needed to fuel their growing businesses. The Apprenti apprenticeship program gives Oregon tech employers a way to play an active role in solving this problem: creating a 21st-century workforce by providing training to the people who are available now, for the jobs that are available now.

Apprenti, along with local partners Lane Workforce Partnership and the Technology Association of Oregon, hopes to start training the first group of Apprenti apprentices in March. Concentric Sky has agreed to be one of the first Oregon companies to take on an Apprenti apprentice. The goal for year one is to place 10 apprentices in Lane County tech companies.

We’re proud to be one of those companies - so much so that we’ve also stepped up to help develop the curriculum and training programs that will be used to train the Apprenti apprentices. We’ll be using our expertise in the industry to help improve outcomes and to provide support for the apprentices going through the program.

You can learn more about Apprenti, or apply for an apprenticeship, on the Apprenti website. Join us in supporting this innovative and proven workforce development program.

Kim Hammond // May 25, 2017

Building Cross-Institutional Learning Pathways with Badgr

Cross-Institutional Pathways Cover image

Due to popular demand, we’ve scheduled a second webinar on this topic on September 15 at 9 am Pacific. Please join us to learn more about the future of digital credentials. Click here to register.

Last week (May 2017) at the Learning Impact Leadership Institute in Denver, the Badgr team had the opportunity to present our work on a new open technical standard called Open Pathways. This new standard fills the gap between Open Badges and other interoperability standards such as IMS Global’s Extended Transcript. It gives organizations a new way to make their badges valuable by describing how those badges relate to other digital credentials as part of a learner’s lifelong learning journey.

Building Cross-Institutional Learning Pathways with Badgr image

While the idea of a learning pathway is not new, using Open Badges from multiple sources to create discoverable pathways linked directly to outcomes is revolutionary. In addition to the open data model that backs the new standard, the Badgr team has created an easy-to-use editing and pathway visualization tool within Badgr to simplify the process of creating learning pathways and tracking learners’ progress.

To address the demand for information about this new standard and how it can be leveraged by existing badging programs, we’ve organized a webinar to explore Open Pathways in Badgr and walk through a few real-world use cases.

Please join the webinar to learn about Building Cross-Institutional Learning Pathways, July 21, at 9 am Pacific. Contact us to request an invitation.

Badgr Team // May 4, 2017

Submitting Evidence for Open Badges in Canvas

Evidence strengthens data image

Our Badgr for Canvas users have been asking for the ability to have learners submit evidence of their achievement as a requirement to earn a badge - and in the latest Badgr for Canvas update, we’ve delivered. Learners can now submit a URL as evidence and automatically earn a badge when they meet the module completion requirements.

Badgr for Canvas can now be added as an “External Tool” to a module in a Canvas course. Once added, this module item will display the badge the learner can earn and an optional field for submitting a URL for completion evidence. Connecting evidence to a badge increases the value of the badge to the earner and reinforces the criteria.

Key advantages:

  • The learner gains the ability to see the badge they can earn from within the module.
  • The instructor gains the ability to optionally require an evidence URL submission to earn a badge, ensuring badges are only issued to learners who meet the module completion requirements.
  • Badge consumers gain additional insights regarding the achievement from the evidence submitted by the learner.
Submitting Evidence for Open Badges in Canvas image

Evidence submission is now being used to enhance the value of badges in Canvas Courses.

“I think this is an important advance for the Badgr app and Open Badges in general. Getting evidence URLs into badges has always been a big hassle for generating ‘evidence-rich’ badges. Making it possible to create Open Badges where the earners are responsible for inserting the evidence URL solves that problem in an elegant way. Plus, the research on motivation suggests that having earners take charge of inserting the evidence will enhance their sense of ownership and pride, and make it more likely that they share their badges on social networks.”

Daniel Hickey, Professor and Coordinator, Learning Sciences Program, Indiana University

Nate Otto, Director of Open Badges at Concentric Sky said, “Evidence is an essential piece of many badge system designs, and we’re happy to offer this first step to supporting the Canvas community in using evidence to award the most meaningful badges possible in courses across the spectrum.”

The development of this feature was supported in part by the TAPD program of the California K-12 High Speed Network. Through the generosity of the TAPD program and Concentric Sky, these new features are being made available to all users of Badgr in Canvas.

Click here to learn more about this new feature.

Contact us by phone or email to learn more about Badgr. Enroll in our free Canvas course to learn firsthand how you can effortlessly integrate Badgr into your Canvas courses.

Nate Otto // February 7, 2017

Developer’s Guide to Issuing Open Badges 2.0

Cute kid with Trophy.

Open Badges are verifiable records of achievement that are published as JSON-LD records using the core Assertion, BadgeClass, and Profile classes of the Open Badges Vocabulary. The Open Badges Specification provides instructions on how these records may be published, transmitted, and verified by applications serving different roles in the Open Badges Ecosystem, such as Issuer, Backpack, Verifier, Displayer, or Consumer.

This post will outline some of the new features and what developers need to know about implementation. See the existing developers’ guide, which is up to date for Open Badges 1.1. This post won’t go into as much detail; it focuses on what is new or different in 2.0.

Status of 2.0

For over two years, the Badge Alliance, formed in 2014, gathered and refined the use cases that were implemented in 2.0, leading to the release of the 1.1 recommendation in May 2015 and the 2.0 recommendation in December 2016. At the same time, community members organized to create a long-term sustainable future for Open Badges by bringing it before the established web standards community.

As a culmination of this effort, in January 2017, Open Badges was formally accepted into the care of IMS Global, an international member-funded standards organization in the educational technology space that publishes standards such as Learning Tools Interoperability (LTI) and [Common Cartridge](https://goo.gl/ePUeGR). IMS took over from the Badge Alliance, the initial home of Open Badges as the Specification made its first steps out from under the wing of the Mozilla Foundation. On December 31, 2016, the Badge Alliance released its final 2.0 Recommendation and handed it over to IMS for finalization. That recommendation is published on openbadgespec.org and has become the “Base Document” that will be adapted by March into the working draft that IMS will coordinate with its supplier community to implement.

The working draft will feature a few spelling corrections, but likely no material changes from the BA recommendation. After two interoperable issuer implementations and a working validator are finished, IMS will move to finalize 2.0. There is an opportunity to modify the specification during implementation to address any show-stopper issues that arise, so the existing recommendation should not be assumed to be identical to the final draft sent to finalization; still, all indicators point to very few changes pending successful implementation.

Let’s talk about how to implement. Open Badges 2.0 is intended to be a very simple upgrade for nearly all issuers, while providing a rich set of new options that enhance badge expressiveness, security, and portability. The new options are almost all optional enhancements from what was available in 1.1. This means that backpack services and verifiers have significant work to do as they upgrade to 2.0, but the benefits they can realize from increased machine-readable metadata and more explicit validity rules make 2.0 an enormous improvement for services that display and make use of Open Badges. I’ll talk about the changes for verification in a future post.

How to Issue Badges

Issuing Open Badges requires constructing and publishing a set of interconnected resources that follow the structure and guidelines set out in the Specification. The properties that make up a badge’s metadata are split across these resources depending on where they apply. Together they form an Open Badge: for each badge awarded, there is an issuer Profile, a BadgeClass, and an Assertion, linked together.

These three components are instances of “Open Badges Vocabulary Data Classes”. Profile, BadgeClass, and Assertion are like recipes for describing certain types of information. The Specification describes the properties available in each one, which are required, and what sort of values are allowed for each. In some cases, it requires the property value in one class to be an instance of another, which means embedding a JSON “object” (that class’s properties within another set of { } curly braces inside the first instance). Let’s start with Profile, BadgeClass, and Assertion.

  • The issuer Profile describes the individual or organization awarding badges. In 2.0, the Profile class is used to describe both issuers and recipients of badges. For issuers, the Profile must have a canonical record published at an HTTP URI, which appears as the profile’s id property. New in 2.0 is the inclusion of verification instructions for Assertions awarded by this profile, which allows issuers to be explicit about how they award badges. It also allows verifiers to fall back to enforcing a newly clarified default setting for “hosted” Assertions.
  • The BadgeClass describes the achievement in terms of metadata such as name, description, image, and criteria. It links back to the issuer by its id or by embedding the issuer Profile record. The BadgeClass must also have an id that takes the form of an Internationalized Resource Identifier (IRI), but in 2.0 issuers are no longer required to use an HTTP identifier with an associated hosted record; other, possibly ephemeral, identifiers are allowed. The urn:uuid scheme is recommended for cases where there is no HTTP-hosted record. Criteria may now be either a URL, as in previous versions, or an embedded instance of the new Criteria class, which relieves issuers of the requirement to host yet another record and allows badges to be more robustly machine readable.
  • The Assertion is the record of an individual’s achievement of the badge. It links to the BadgeClass by id or an embedded BadgeClass record. Assertions have always identified their recipients by a string-based property, but 2.0 is more explicit about how Assertions identify those recipients, opening up the possibility for badges to be awarded to other issuers, to organizations identified by url, to individuals identified by a social media profile url or telephone number as well as the most common method, email. Assertions contain brief verification instructions to indicate which verification procedure should be followed from those instructions, such as specifying that the id of the Assertion is an HTTP identifier where a canonical copy may be retrieved, or that the Assertion will be delivered as a record cryptographically signed with a specific one of the issuer’s defined keypairs. New metadata is also available for Assertions in the form of the option to include Evidence records and narratives that describe the process of meeting criteria, potentially connecting multiple evidence records and alignments.

There are a number of other complementary data classes that may be mixed in. I mentioned Criteria and Evidence, but several other new and updated classes are available as well. Those include an updated AlignmentObject and RevocationList as well as new classes CryptographicKey and Image.

A simple example

Here is a hypothetical example Open Badge that shows some of the new features in action.

Issuer: An issuer establishes a Profile, by hosting the following JSON on their website available at https://example.org/organization.json:

  {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Profile",
    "id": "https://example.org/organization.json",
    "name": "An Example Badge Issuer",
    "image": "https://example.org/logo.png",
    "url": "https://example.org",
    "email": "contact@example.org",
    "verification": {
      "allowedOrigins": ["example.org", "two.example.org"],
      "verificationProperty": "id"
    }
  }

This is almost identical to what the issuer would have published under 1.1. They have updated the context URI and are using the new Profile type declaration. The previously used Issuer may also be used as a type; it is now considered a subclass of the more general Profile. In addition, the issuer has chosen to specify a verification policy, making one change from the default behavior: also authorizing Assertions hosted on a second origin under their control, two.example.org.
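
As a sketch of how a verifier might apply such a policy, consider the snippet below. The helper name origin_allowed is ours, not from the Specification, and the real verification logic also covers the startsWith option and signed Assertions; this only illustrates the allowedOrigins idea.

```python
from urllib.parse import urlparse

# Illustrative helper (not the normative algorithm): decide whether a
# hosted Assertion's id is covered by the issuer Profile's policy.
def origin_allowed(assertion_id: str, issuer_profile: dict) -> bool:
    policy = issuer_profile.get("verification", {})
    allowed = policy.get("allowedOrigins")
    if allowed is None:
        # Default behavior: trust only the issuer Profile's own origin.
        allowed = [urlparse(issuer_profile["id"]).hostname]
    return urlparse(assertion_id).hostname in allowed

issuer = {
    "id": "https://example.org/organization.json",
    "verification": {
        "allowedOrigins": ["example.org", "two.example.org"],
        "verificationProperty": "id",
    },
}

print(origin_allowed("https://two.example.org/assertions/1.json", issuer))
print(origin_allowed("https://evil.example.com/assertions/1.json", issuer))
```

With the policy above, an Assertion hosted on two.example.org passes, while one on an unrelated origin does not.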

BadgeClass: The next step is to define a BadgeClass that links to this Profile. The Issuer chooses to make this record available at https://example.org/robotics-badge.json:

  {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "BadgeClass",
    "id": "https://example.org/robotics-badge.json",
    "name": "Awesome Robotics Badge",
    "description": "For doing awesome things with robots that people think is pretty great.",
    "image": {
      "id": "https://example.org/robot-badge.png",
      "caption": "A pretty badge, with many happy trees.",
      "author": "http://example.org/Bob_Ross_profile.json"
    },
    "criteria": {
      "id": "https://example.org/robotics-badge.html",
      "narrative": "To earn the **Awesome Robotics Badge**, students must construct a basic robot.\n\nThe robot must be able to:\n\n  * Move forward and backwards.\n * Pick up a bucket by its handle."
    },
    "issuer": "https://example.org/organization.json",
    "alignment": [
      {
        "targetName": "CCSS.ELA-Literacy.RST.11-12.3",
        "targetUrl": "http://www.corestandards.org/ELA-Literacy/RST/11-12/3",
        "targetDescription": "Follow precisely a complex multistep procedure when carrying out experiments, taking measurements, or performing technical tasks; analyze the specific results based on explanations in the text.",
        "targetCode": "CCSS.ELA-Literacy.RST.11-12.3"
      }
    ]
  }

There are several 2.0 implementation options showcased here:

  • The issuer is identified by URL. Another option would have been to embed the issuer profile JSON above almost exactly as it is independently published. This allows for better portability in some cases, where the trustworthiness of the BadgeClass is well-established.
  • The author has chosen to provide additional metadata about the badge image by embedding an instance of the Image class instead of just providing the image URI like "image": "https://example.org/robot-badge.png". This allows displayers to use the included caption to render alt text to improve accessibility.
  • The issuer has chosen to embed criteria information and include a link to an external URL. At least one option is required, though if issuers only wanted to link to a URL, they could do so directly using the 1.1 format "criteria": "https://example.org/robotics-badge.html". Embedding a criteria narrative allows display platforms to show information about how to earn the badge directly in their application instead of only rendering a link.
  • The updated term names for alignment are used. Previously, alignment used name, description, and url. The new targetCode property identifies a code string within the target framework for cases where the targetUrl alone is not specific enough.

Assertion: The final step to a complete Open Badge is to publish an Assertion. This issuer uses hosted verification, so the Assertion is published at one of the allowed origins declared in the issuer Profile, yielding a canonical URI like https://example.org/beths-robotics-badge.json:

  {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.org/beths-robotics-badge.json",
    "recipient": {
      "type": "email",
      "hashed": true,
      "salt": "deadsea",
      "identity": "sha256$b6877bb4361da770ac2c9fa2f28136de8dc93d823bbc1de438d5987e3f4032ef"
    },
    "image": "https://example.org/beths-robot-badge.png",
    "evidence": "https://example.org/beths-robot-work.html",
    "issuedOn": "2016-12-31T23:59:59Z",
    "expires": "2017-06-30T23:59:59Z",
    "badge": "https://example.org/robotics-badge.json",
    "verification": {
      "type": "hosted"
    }
  }

  • The issuer has chosen not to implement the Evidence class to provide machine-readable descriptions of evidence in the Assertion itself. See an example that does.
  • The recipient’s email address beth@example.org is salted and hashed. The declared salt value deadsea is appended to the recipient email address. Hashing the string beth@example.orgdeadsea with the SHA-256 algorithm yields b6877bb4361da770ac2c9fa2f28136de8dc93d823bbc1de438d5987e3f4032ef. The sha256$ algorithm identifier is tacked on the beginning to form the identity reference in the Assertion.
  • 2.0 is more explicit about what types of identifiers may be used, but in this case, email is still the property used to identify the recipient. If a system trusted that the following profile was a match for a user account on their system, this Assertion could be trusted to have been awarded to that profile.
  • An expiration date is optionally included. Both issuedOn and expires use the ISO-8601 timestamp format and include a time zone designator (Z for UTC). Several other timestamp formats were previously accepted.
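
The salting-and-hashing procedure described above can be sketched in Python. The helper names are illustrative, not from the Specification; the comparison logic is what a verifier performs when matching a hashed recipient.

```python
import hashlib

# Illustrative helpers for the hashed-recipient scheme: the salt is
# appended to the email, hashed with SHA-256, and prefixed with the
# algorithm identifier.
def make_identity(email: str, salt: str) -> str:
    digest = hashlib.sha256((email + salt).encode("utf-8")).hexdigest()
    return "sha256$" + digest

def matches_recipient(email: str, recipient: dict) -> bool:
    # A verifier recomputes the hash with the declared salt and compares.
    if not recipient.get("hashed"):
        return recipient["identity"] == email
    return make_identity(email, recipient.get("salt", "")) == recipient["identity"]

recipient = {
    "type": "email",
    "hashed": True,
    "salt": "deadsea",
    "identity": make_identity("beth@example.org", "deadsea"),
}
print(matches_recipient("beth@example.org", recipient))     # True
print(matches_recipient("mallory@example.org", recipient))  # False
```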

An example recipient profile matching the above badge would look like this:

  {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Profile",
    "email": ["beth@example.org", "bethpersonal@example.com"],
    "url": "https://twitter.com/exampleBeth"
  }

Unlike the above examples, this Profile does not have an id and may never be published for public consumption. It is an example of how an application can model a user on its system based on the properties it trusts to serve as recipient identifiers for that profile. In addition to viewing badges issued to either of the listed email addresses as congruent, this profile may also be connected to badges awarded to this user’s Twitter handle, where the profile URI was identified and the url IdentityObject type was used.

Next: Verification

As you can see, the new options are easy for issuing applications to implement if they are starting from a compliant 1.1 system. There are more significant upgrades required for verification workflows, but also more reliability, once we have solidified some of the key components of the verification ecosystem. Stay tuned for an upcoming post that goes into more detail on these topics.

Badgr Team // February 5, 2017

FOCUS Community Badges for Engagement and Learning

A collection of badges defined by FOCUS learning.

Open Badges can support competency recognition and professional development, and they can reinforce community values. Badges are broadly understood as recognition of certification, training, or other competencies. But badges are flexible and can recognize any achievement important to a community, including types of engagement that have not always been understood as high-value credentials.

Badges not only recognize formal learning, but can also help to describe a person’s interests or passion for a given subject and, when combined, they can describe a level of commitment that a single point-in-time, high-value badge may not. These “community badges” can make up an important part of a learner’s overall portfolio.

Because of their visual medium and ability to be displayed on profiles, Open Badges offer an opportunity to get playful with how members of a community can engage with learning content and tokens of recognition.

One great example of fun badges deployed within a formal learning environment comes from FOCUS, a large non-profit campus outreach organization that encourages college students to grow their relationship with the Catholic faith. FOCUS began using Open Badges by adding the Badgr app to their Canvas courses in April 2016. The badge program was implemented to support engagement and reinforce values, without the need for testing.

With a relatively small number of classes, FOCUS has awarded over 2,111 badges in less than one year.

Since most of their employees are fresh out of college, they wanted to limit the amount of assessment they do, while encouraging self-reporting that didn’t feel like testing. Open Badges were the means for them to collect the data they needed.

“On the back end, Badgr lets the Formation and Training team see where the organization is as a whole; we can find out what’s popular, where people are behind, and which teams or groups are most engaged. It helps us see whether our training is really working or not!” -Kerry Floyd, FOCUS

The people at FOCUS created some fun badges to motivate and gamify employee learning activities. Many of their badge award activities follow along with Catholic faith activities, such as learning bible stories and reading. We’ve included a few fun examples from FOCUS’ badging program.

FOCUS Community Badges for Engagement and Learning image
FOCUS Community Badges for Engagement and Learning image
FOCUS Community Badges for Engagement and Learning image

Tell us your badging story! How have you implemented your badging program? Does your badging program serve to motivate, measure or track progress? How has your program changed now that you use badges?

Nate Otto // January 18, 2017

Open Badges 2.0 Profiles and Recipient Identifiers Deep Dive

Two students sitting at the computer lab

One of the most intricate sets of use cases defined in the Open Badges 2.0 discovery process was around the definition of an Issuer and a Recipient, in order to improve how badges map to how individuals and organizations construct their identity online and offline. In many ways, email addresses serve as great identifiers. However, people have multiple email addresses. Sometimes they move on from a school or a job, losing access to an address in the process. How can Open Badges allow people to operate within these realities?

Last week, I wrote about how Open Badges 2.0 introduces flexible recipient identifiers and Profiles; this week, I will focus more directly on the specific motivations and approaches that were accepted into the 2.0 Recommendation. This topic covers a sizable portion of the capabilities proposed for 2.0, where some of the others, like the ability to embed evidence metadata in Assertions, are far simpler.

Award badges to and from a Profile (#77)

The first set of capabilities on this topic deals with awarding badges to any of multiple attributes that identify an entity, and with using issuer Profiles to receive badges as well as award them.

  1. As an organization or an individual, I want to define and be identified by a public hosted Profile that includes any number of attributes and to have badges issued to that profile… (#77-1)
  2. As an organization or individual, I want to issue badges to a recipient’s profile that can contain multiple identifiers… (#77-2)
  3. As an organization or an individual, I want to issue badges from my public profile… (#77-3)
  4. As an issuer, I want to define a profile based on any number of attributes for a recipient who has not yet created their own profile on my system… (#77-4)
  5. As an issuer, I would like to award a badge to a recipient based on hashed (obscured) versions of attributes that identify them so that inspectors are not easily able to discover the recipient’s identity, while enabling the badge to be reconciled with public versions of the recipient’s profile. (#77-5)

How did we serve these capabilities in the Open Badges 2.0 Recommendation? The solution starts with the new Profile class. The previously used Issuer class is now considered a subclass of the more general Profile that can be used to represent any actor that uses Open Badges, either as issuer or recipient.

Defining a Profile: In order to be used for issuing, a Profile must be publicly available at a consistent URI (#77-1), though ephemeral instances of Profiles may be generated dynamically within applications to represent users and entities (#77-4). Case #77-3 merely reaffirms the issuer property of the BadgeClass, which identifies an issuer Profile as the creator of that badge and issuer of its Assertions, with no change from v1.1.

Awarding Badges: Badges are awarded to a particular string identifier (one of the so-called Profile Identifier Properties) that is one of the attributes of a profile (#77-5). Different issuers may know an individual by different email addresses, but if both addresses are understood to be part of a profile, inspectors can understand badges awarded to each address as belonging to the same individual. Alternately, as requested in #77-2, an issuer may award an Assertion directly to a profile by identifying its “id” property as the target.

If the following two emails were trusted by a consumer to correspond to this profile, the consumer could understand badges awarded to either one of them, or to the profile id itself, as belonging to this published profile.

  {
    "@context": "https://w3id.org/openbadges/v2",
    "id": "https://example.org/profile/1",
    "type": "Profile",
    "name": "Steve Exampleton",
    "url": "http://example.org",
    "email": ["old_address@example.com", "new_address@example.org"]
  }

Verifying a Profile is Accurate

That brings us to the question of how consumers decide to trust the contents of a profile. Who believes that both these addresses belong to Steve Exampleton, as the above profile claims? This requirement is expressed in issue #79.

  1. As a badge consumer, earner, or issuer, I want to verify that a profile truly corresponds to its stated entity so that I can trust badge claims related to that profile (#79).

There is no global trust authority in the Open Badges ecosystem, though there may be many issuers that you trust. This is a critical difference between badges and some other systems, like DNS, which is based on a broadly shared consensus on the trustworthiness of certain root certificates implemented in the browsers on your machine. At best, Open Badges will have super-providers of trust, and before 2.0, there was no canonical way to communicate who was trusted by whom. 2.0 introduces the concept of Endorsement, based on some key vocabulary development work done by contributors to the W3C Verifiable Claims Task Force and Credentials Community Group.

I’ll delve into Endorsement in a different post with specific code examples, but let me briefly point to several more capabilities defined for 2.0 and how the combination of flexible Profiles and Profile Identifier Properties with Endorsements can serve each case.

Update the identity of a recipient (#88):

  1. As a recipient of a badge, I want to request that issuers update the identity information on my badge assertions so that I can prove ownership of my badges in perpetuity (#88-1).
  2. As a recipient of a badge, I want to provide evidence of ownership of my badges in perpetuity even if my identity information originally referenced in those badges becomes deprecated (#88-2).
  3. As a badge issuer, I want to update the recipient profile of a previously issued Assertion, so that recipients of my badges can easily use these badges, even if they change email addresses or lose the ability to verify a previously used identifier (#88-3).

The first (#88-1) can easily be served for hosted-verification Assertions by updating recipient information in the hosted representation to point to the new identifier, and for signed-verification Assertions by reissuing. However, endorsement opens alternate pathways as well. For example, if the issuer of an Assertion whose recipient identifier is "type": "email", "identity": "old_address@example.com" endorses a profile containing both that address and "new_address@example.org", an observer could conclude that a user who could prove ownership of the new address could also be trusted to correspond to the old address, at least for the issuer in question. This approach would serve #88-3 for any consumers or platforms who could understand the issuer’s endorsement.

The applicability of endorsement to the second transferability story (#88-2) is a little more direct. If a recipient has an endorsement you trust that verifies multiple identifier properties about them, you can trust badges that are awarded to any of those properties are theirs even if they can no longer, for instance, prove that they have access to a particular old email address.
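
A minimal sketch of that reasoning follows. The helper names are illustrative, and a real verifier would first check the endorsement itself (its signature or hosted record) before trusting the profile; this only shows the identifier-equivalence step.

```python
# Illustrative sketch: a consumer who trusts an endorsement of a profile
# listing several identifiers can treat badges awarded to any of those
# identifiers as belonging to the same recipient.
def endorsed_identifiers(profile: dict) -> set:
    identifiers = set()
    for prop in ("email", "url", "telephone"):
        value = profile.get(prop, [])
        identifiers |= set(value if isinstance(value, list) else [value])
    return identifiers

def belongs_to_holder(recipient_identity: str, endorsed_profile: dict) -> bool:
    # Assumes the holder has already proven control of at least one
    # identifier in the endorsed profile.
    return recipient_identity in endorsed_identifiers(endorsed_profile)

endorsed = {"email": ["old_address@example.com", "new_address@example.org"]}
print(belongs_to_holder("old_address@example.com", endorsed))  # True
```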

Link profiles across multiple platforms (#83)

Another category of capabilities touching on profiles, identifiers, and endorsement is a request to link profiles across multiple platforms:

  1. As an issuer, I want to have all of my issued badges reference a single issuer profile so that I may use multiple badge-issuing applications or switch between platforms (#83-1).
  2. As a consumer or recipient, I want to understand when multiple Assertions are awarded by the same organization or individual, even if that issuer uses multiple platforms (#83-2).

In order to serve the first story without introducing large amounts of new complexity, the verification property now available in Profile allows issuers to set a scope, which may span origins, for where hosted Assertions should be trusted, by setting a startsWith or allowedOrigins property. For signed Assertions, multiple CryptographicKeys may be associated with an issuer Profile as publicKeys, supporting this use case. I expect these features to be implemented in verification applications at the coming coordinated launch of 2.0-supporting issuers, backpacks, and verifiers, though the cross-origin scoping may take some time to appear in issuers due to a significant increase in complexity.

In this post, we explored part of the Open Badges 2.0 Recommendation drafting process: how capabilities that had been refined and proposed as part of this effort turned into specific changes to the Specification and its accompanying examples.

Nate Otto // January 11, 2017

Awarding Open Badges with 2.0: Recipient Identifiers

Header to use on spec update blog posts

The v2.0 Open Badges Recommendation was published by the Badge Alliance on December 31. You can check it out on openbadgespec.org. It has a bunch of new features, including:

  • Improved Linked Data / JSON-LD support for increased portability and compatibility with other standards
  • Embedded evidence and criteria
  • More flexible recipient identifiers
  • Endorsement
  • New image metadata for accessibility
  • Internationalization and multi-lingual badges
  • Improved alignment to external frameworks and objectives
  • Security improvements

When I have little blocks of time over the next month, I’ll be highlighting key improvements in a series of blog posts. Any of the above topics would make for a great discussion, but let’s first turn our attention to how Open Badges 2.0 improves the flexibility and power of recipient identifiers in Assertions.

Flexible recipient identifiers in Open Badges 2.0

Open Badges v1.1 nominally supported methods for awarding badges to identifiers other than email, but it was not explicit about how that would be done. The 2.0 Recommendation solidifies the method and provides a clearer description of how to identify a recipient by an identifier other than an email address, as well as how backpacks and consumers are expected to validate these identifiers.

You would use a block like this in the Assertion to award a badge to an entity identified by url:

  {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "recipient": {
      "type": "url",
      "hashed": false,
      "identity": "https://exampleuniversity.edu"
    }
  }

That would correspond to the following issuer Profile, because the “url” property matches:

  {
    "@context": "https://w3id.org/openbadges/v2",
    "id": "https://badgeplatform.net/issuers/123",
    "type": "Profile",
    "name": "Example University",
    "email": "info@exampleuniversity.edu",
    "url": "https://exampleuniversity.edu"
  }

2.0 does not require recipients to publish Profiles, as they can be identified in Assertions by their constituent properties, like their email address. However, I think many backpack platforms (or more generally, any platforms that have user accounts for badge recipients) will take advantage of the more flexible Profile class and indeed publish profiles for their recipients. This could allow recipients to do things like endorse and issue badges themselves. Perhaps more importantly, the Profile class is a standard method to publish a description of an actor in the Open Badges ecosystem (issuer or recipient) with multiple properties, instead of only understanding users as a single email address identifier.

Awarding a badge to a Profile directly

It is also possible with 2.0 to issue directly to a Profile, not by identifying one of its string-identifier properties (email, url, telephone…), but by awarding to the “id”. The “id”, an alias for the JSON-LD keyword “@id”, is the canonical IRI/URI that identifies a particular published instance of an entity’s profile.

  {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "recipient": {
      "type": "id",
      "hashed": false,
      "identity": "https://badgeplatform.net/issuers/123"
    }
  }

We’ll see if implementers prefer to be explicit like this or not. I expect them to continue to use properties most of the time.

The advantage of identifying a recipient entity via an identifier property is that the recipient can have accounts on multiple systems that can be connected if viewers trust each profile. A badge could be understood to be received by either profile that is trusted to contain the recipient identifier property.

The advantage of identifying via profile id is that the issuer can be very explicit about what should be regarded as the recipient entity. This may, however, be slightly vulnerable to manipulation if the entity hosting the profile changes its contents after the badge is awarded but before consumers process it.
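
Both styles of recipient identification can be sketched with a single matching helper. The function name is ours, and for simplicity this ignores hashed identities; it only illustrates how a consumer might resolve a recipient block against a known Profile.

```python
# Illustrative sketch (not normative): does an Assertion's recipient
# section refer to this Profile? Covers both the identifier-property
# style ("email", "url", "telephone") and the direct profile-id style.
def recipient_matches_profile(recipient: dict, profile: dict) -> bool:
    kind, identity = recipient["type"], recipient["identity"]
    if kind == "id":
        return profile.get("id") == identity
    value = profile.get(kind)  # a single string or a list of strings
    values = value if isinstance(value, list) else [value]
    return identity in values

profile = {
    "id": "https://badgeplatform.net/issuers/123",
    "type": "Profile",
    "email": "info@exampleuniversity.edu",
    "url": "https://exampleuniversity.edu",
}

print(recipient_matches_profile(
    {"type": "url", "hashed": False, "identity": "https://exampleuniversity.edu"},
    profile))  # True
print(recipient_matches_profile(
    {"type": "id", "hashed": False, "identity": "https://badgeplatform.net/issuers/123"},
    profile))  # True
```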

Why this matters

The flexibility to award badges to recipient identifiers other than an email address, and to understand entities as having potentially multiple email addresses and other types of identifiers, allows us to better map the badges ecosystem to how the world works for badge users. People and organizations are known in their different circles by different identifiers, but at times, they want to present their credentials together. 

Open Badges 2.0 recipient identifiers allow you to recognize the people and organizations in your life. You can identify them with their:

  • email address
  • telephone number
  • url — a homepage or a social media profile like “https://twitter.com/ottonomy”.
  • profile id

And then anyone who understands the recipient’s profile as a collection of verified properties that describe them can understand the badges they have earned from many issuers who know them by their various names. 

The foundation for these possibilities is written into the 2.0 Recommendation, but their success depends on great implementations for issuing, validation, and backpack workflows created by the companies and developers investing in the Open Badges ecosystem.

Nate Otto // January 5, 2017

Open Pathways Connect Badges to What You Care About

pathway.png image

Open Badges are digital tokens of accomplishment used to recognize many types of achievements around the world. By some estimates, there have been over 10 million Open Badges awarded by thousands of issuers across hundreds of platforms and software.

Today, each Open Badge has a description and a link to detailed criteria that describe what it takes for a recipient to earn the badge. Each awarded badge (the so-called Assertion) can link to evidence of what its recipient did to meet those criteria. As expressive as this information can be, and even as powerful as this will become when issuers and display platforms incorporate support for the embedded criteria and evidence now part of the Open Badges Specification v2.0, this metadata is not proof that a particular badge has any currency in the real world. How can earners and consumers understand which credentials are good representations of the skills and experiences that are valuable to their community of practice or discipline?

Open Pathways bridge that gap and give practitioners the ability to publish a map of the learning landscape in their community that links together badges from many providers. This is the layer of “glue” necessary to understand the connection between specific badges and the skills needed by employers.

Concentric Sky and several partners are leading the way to define a simple data vocabulary that can describe competencies, experiences, program requirements, or any other landmark on a learning map that is an important element of developing or understanding expertise in a field. Once an Open Pathway is defined, Open Badges can be markers of specific experience relative to that pathway.

Badgr launched its Pathways features in October 2016 after months of testing and development with a set of beta partners. This release, which allowed Open Badges issuers to define Pathways, is the first stage in a plan to advance the idea of an open system for defining and using Open Pathways across the distributed Open Badges ecosystem.

Open Pathways map the learning landscape

A learning pathway is an organized set of educational goals shared in a community. It is the connection between specific digital credentials and a community’s understanding of what people have accomplished, in terms of requirements, competencies, or other “real-world” objectives. While we are starting with the concept implemented in Badgr, the goal is to allow publishing and consumption of learning pathways distributed across multiple services much like Open Badges themselves function.

The goal of Open Pathways is to create a lightweight conceptual model that can link various specialized competency frameworks, degree maps, job skill profiles and more. This model can be published as a reusable vocabulary, a method of standardizing data formats across stakeholders, that is compatible with a number of other representations of more specific concepts like a Competency.

Pathway publishers define Pathways in order to organize a set of objectives into a comprehensible structure and to link to badge definitions that are a known fit for specific objectives, whether or not the Pathway publisher is the Issuer of those badges.

Open Pathways Connect Badges to What You Care About image

Case Study: Pacific Science Center Discovery Corps

The Pacific Science Center was one of the beta partners Concentric Sky worked with to test Badgr’s Open Pathways features in 2016 as part of a collaboration with the University of Washington Information School to add digital badges to recognize student achievements and progress in the Discovery Corps youth development program.

Discovery Corps is a paid internship opportunity for high-school-age youth who gain science and job skills as they work on the front lines of the Science Center’s operations. Youth in the program come from all around the Seattle area and from diverse backgrounds. Corps members serve in many of the day-to-day operational roles in the museum and are a key part of its mission to ignite science-based curiosity in its young visitors.

The program’s administrators worked with youth and a team led by Katie Davis of the UW iSchool to translate the achievements and job roles that made up the program into Open Badges aligned to a “Career Ladder” pathway and complementary Science and Soft Skills pathway. Students start at the Discovery Corps Assistant level, doing what is for many of them their first public-facing job with basic responsibilities like greeting visitors to exhibits. They move up through three more levels of increasing responsibility, gaining Open Badges as tokens of authorization that they have checked off on the requirements for each role. As students gain more experience and science knowledge, they not only gain the ability to take on more responsibility in the museum, they get badges that recognize their science knowledge and career-ready soft skills.

Concentric Sky built a custom web application for the Discovery Corps that lets students and program administrators track progress through the program. Badges are awarded from within the free Badgr.io service, and students use the Discovery Corps site to view and share their progress. Pathways serve several roles for the Discovery Corps:

  • Wayfinding: Students can see what they’ve done and what they need to do next. They can see which badges correspond to each level of achievement and see their choices for how to complete each level and get authorized to serve at the exhibits they are most interested in. Program staff can view a student’s progress and advise them on how to proceed.

  • Understanding Progress: Students and staff can see the progress of other students in the program. This allows students to see what specializations their peers have chosen and staff to plan for scheduling and to optimize training opportunities by scheduling them for the right students at the right times.

  • Sharing Achievements: In 2016, 100% of the students who graduated from the program applied to college. Open Pathways give students an ability to share their achievements, not as a flat list of badge awards that an admissions officer would be unlikely to understand, but as a view of their progress embedded in the story of their progression through the four levels of the career ladder and three levels of each customer service skill.

Open Pathways Connect Badges to What You Care About image

The progress tracking and sharing features that Concentric Sky prototyped for the Pacific Science Center web app are coming to Badgr in early 2017 to enhance the existing Pathway authorship tools that are already in place.

Toward a better badge currency

Our work with the Pacific Science Center shows one way to use learning pathways – to organize objectives within a defined learning program where all badges are awarded by the program – but there are many other possibilities.

For example, Pathway authors can set up automated meta-badges, awarded upon completion of specific sets of other requirements based on badges earners might hold from other organizations. Concentric Sky might award its internal Database Skills badge based on completion of either an internal training course or two specific badges from an external training provider. Badges from any Open Badges compliant system can be added to a pathway.
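As a hypothetical sketch of such a completion rule (the badge identifiers and function name are invented for illustration and are not part of any real Badgr API):

```python
def database_skills_earned(earned_badge_ids):
    """Return True when the earner qualifies for the meta-badge:
    either the internal training-course badge, or both badges
    from the external training provider."""
    internal = {"internal:db-course"}
    external = {"ext:sql-fundamentals", "ext:db-administration"}
    earned = set(earned_badge_ids)
    # Either path through the pathway satisfies the requirement.
    return bool(internal & earned) or external <= earned
```

A pathway engine would evaluate rules like this whenever a new badge award arrives, then issue the meta-badge automatically once a rule is satisfied.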

The data for how one organization includes the badges issued by another into its pathways is a valuable signal to understand the currency of a badge. With Open Pathways, we are moving toward a future where we can better understand people’s varied learning journeys and the deep expertise that experts develop in their disciplines through these journeys. Good strong data about how badges are being used will combine with the expressive metadata that can be embedded in Open Badges to break open the black boxes of today’s degrees and certificates and help learners and employers navigate through a rapidly growing map of learning opportunities.

See more about how to use Pathways in Badgr today or contact us to get involved in helping to advance Open Pathways.

Cale Bruckner // July 29, 2016

HigherEd.org Competency-Based Education Portal Launches

Image for HigherEd.org article.

Concentric Sky partnered with Lord Fairfax Community College (LFCC), The American Health Information Management Association (AHIMA) Foundation, the International Association of Administrative Professionals (IAAP), Microsoft, Amazon Education, and other organizations to develop HigherEd.org, a free competency-based education resource portal. Through the portal, individuals can work towards nationally recognized occupational credentials. Currently, the portal focuses on career pathways in Health Information Management, Administrative Systems Technology, Networking, Cybersecurity, and Information Systems Technology.

Read more about the launch of the portal on the Lord Fairfax Community College website.

Canvas // October 6, 2015

Instructure Partners with Concentric Sky’s Badgr to Offer Digital Badge Compatibility within Canvas

Instructure partners with badgr image

Instructure, a software-as-a-service (SaaS) company and creator of the Canvas learning management system (LMS), today announced a premier partnership with Concentric Sky, an Oregon-based software developer and creator of Badgr, an open source tool for issuing and managing digital badges. This partnership will allow institutions to create open badges that are compatible within the Canvas environment.

“We hear increasing demand for digital badges from our users and Badgr offers an easy-to-use interface that aligns seamlessly with Canvas,” said Melissa Loble, vice president of partners and programs at Instructure. “It’s difficult for schools to create badges that are portable and have all of the necessary data embedded. This partnership with Concentric Sky allows us to offer a powerful and dynamic open source option that makes it easy for institutions to adapt badging to match their needs.”

Digital badges are used as online representations of skills and achievements. Open Badges takes that concept one step further, allowing for verification of those skills and achievements through credible organizations. Because the system is based on an open standard, Open Badges is fully portable. This allows badge earners to combine badges from multiple sources to tell a more complete story of their achievements – both online and off.

“I believe portable digital micro-credentials will play a vital role in the future of education,” said Wayne Skipper, CEO of Concentric Sky. “Badgr is designed to make Open Badges easily accessible for both educators and students, and Instructure’s keen focus on learning outcomes makes them a natural partner for us.”

Open Badges started as a collaborative project between the MacArthur Foundation, HASTAC and Mozilla and has continued to grow through an open, highly collaborative community led by the Badge Alliance.

“The badges community is growing quickly,” said Nate Otto, director of the Badge Alliance. “Partnerships like this help immensely by getting Open Badges into the hands of learners, to help them track their progress and success over time.”

Canvas has consistently emphasized the importance of open standards and interoperability as central elements of community-driven innovation. The integration of Badgr will allow Canvas users to explore new types of pedagogical models based on Open Badges.

In addition to being fully committed to supporting institutions in adhering to FERPA, Canvas is a signatory of the SIIA Student Privacy Pledge, which details Instructure’s commitment to protecting users’ personally identifiable data.

ABOUT INSTRUCTURE Instructure, Inc. is the software-as-a-service (SaaS) technology company that makes software that makes people smarter. With a vision to help maximize the potential of people through technology, Instructure created Canvas and Bridge to enable organizations everywhere to easily develop, deliver and manage engaging face-to-face and online learning experiences. To date, Instructure has connected millions of teachers and learners at more than 1,400 educational institutions and corporations throughout the world. Learn more about Canvas for higher ed and K-12, and Bridge for the corporate market at www.Instructure.com.

View original article

Nate Otto // June 8, 2015

Digital Badges on the Open edX Platform

Badgr An open-source badge issuing, management, and user achievement tracking platform.

EdX and Concentric Sky have collaborated to incorporate digital badging into the Open edX platform. Following the integration of Badgr into a badging MVP on the Open edX platform, students will be able to earn badges upon completing a course and share these badges on the Mozilla Backpack.

At Concentric Sky, we’re proud to be part of a growing ecosystem around Open Badges. To support the community, we’ve developed Badgr - an open source platform for issuing and managing Open Badges. And we couldn’t ask for a better launch partner for Badgr than edX. Open Badges are visual symbols of students’ accomplishments that they can take with them to display all over the web alongside badges from their other experiences.

When the Open Badges feature is activated, Open edX communicates with Badgr to create and store badge records for each student who completes courses. Open edX administrators can either configure an instance of our open source Badgr Server package or use our free hosted Badgr platform. Every badge issued through Badgr is compatible with the latest version of the Open Badges specification, which was created by the Mozilla Foundation to help people connect their learning achievements from all different spheres of their experience. Using the open specification means the badges issued from within Badgr may be moved to or displayed within any other application that understands Open Badges. Users who have earned Open Badges anywhere else on the web can import them into Badgr and build a unified collection of their accomplishments, no matter where they were earned.

Students can store their digital badges, and then present them together with badges earned in other experiences. Open Badges come in the form of an image file they could save on their hard drives or in the cloud. Various cloud platforms, including Badgr for web and mobile, the Mozilla Backpack, and Open Badge Passport are designed to understand metadata “baked” into badge images and verify the authenticity of those badges, so that learners can reliably use these credentials when applying for jobs or demonstrating their competence.
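The “baking” mentioned above embeds the assertion data in the badge image itself; for PNG badges, the baking specification uses an `iTXt` chunk with the keyword `openbadges` whose text is the assertion JSON (or its URL). A minimal sketch of extracting that baked data with only the Python standard library (error handling kept deliberately thin):

```python
import struct

PNG_SIGNATURE = b'\x89PNG\r\n\x1a\n'

def extract_baked_assertion(png_bytes):
    """Scan PNG chunks for the 'openbadges' iTXt chunk that badge
    baking uses, and return its text payload, or None if absent."""
    if not png_bytes.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    pos = len(PNG_SIGNATURE)
    # Each PNG chunk: 4-byte big-endian length, 4-byte type,
    # <length> data bytes, then a 4-byte CRC.
    while pos + 8 <= len(png_bytes):
        length, chunk_type = struct.unpack('>I4s', png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if chunk_type == b'iTXt':
            keyword, _, rest = data.partition(b'\x00')
            if keyword == b'openbadges':
                # Skip the compression flag and method bytes, then the
                # null-terminated language tag and translated keyword.
                text = rest[2:].split(b'\x00', 2)[2]
                return text.decode('utf-8')
        pos += 8 + length + 4
    return None
```

This sketch does not verify chunk CRCs or handle compressed `iTXt` payloads; a production verifier would also validate the extracted assertion against its hosted source.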

For its part, edX is proud to be collaborating with the Open Badging community and Concentric Sky in particular to herald a fundamental change in the way society recognizes, assesses, motivates and evaluates learning. Digital badges will be an important part of digital credentials on the edX platform. After the completion of this MVP, edX will continue to work towards becoming an issuer of badges for course completion and other incremental achievements for edX courses on edx.org. There are plans to instrument the edX platform to generate badging events for student achievements and do extensive data collection around edX badge usage.

Together, edX and Concentric Sky see some exciting possibilities ahead involving awarding badges for smaller achievements within a course, representing skills and experience gained, and connecting badges in learning pathways that travel through multiple courses.

Cale Bruckner // March 23, 2015

Advocating Technical Innovation

The Capitol building in Washington, D.C.

I’m in Washington, D.C. this week to meet with elected officials and regulators about issues affecting the tech industry and our economy. 

As part of ACT | The App Association’s annual fly-in, I’m joining more than 50 small tech companies from across the country to advocate for an environment that encourages technical innovation and inspires economic growth.

Our message is simple. Small companies like Concentric Sky are creating solutions that are improving lives, creating jobs, and fueling our economy.

But, policymakers in Washington must understand issues threatening small tech companies to ensure growth continues. The concerns we will raise this week include data privacy and security, internet governance, intellectual property and patent reform, mobile health regulation, and regulatory obstacles to growth. These are important issues for which the federal government is considering taking action. 

I’m looking forward to sharing my perspective on these important issues with my elected officials and regulators.

Silicon Florist // March 17, 2015

Eugene Based Concentric Sky’s Creation Gains Notoriety Through the International Year of Light

Radiance Dome image

I always love when Oregon startups and tech companies take a more global stage. You know, like that Oregon built lion thing that Katy Perry rode in the Super Bowl. Well okay. Maybe not that. But this. This is cool. Check out how Eugene’s Concentric Sky is celebrating the International Year of Light.

View original article

Josh Clark // March 11, 2015

Designing for Relationship

Designing for relationship

As the digital landscape and physical space we inhabit become more integrated with one another, the role of design becomes increasingly difficult. We live in an age where people are constantly connected to devices and the trend of wearables, beacons, and connected homes will only make digital connections more pervasive. As we move into the constantly connected future, we require new design thinking. Design for utility, emotion, and even connection is no longer enough. We must begin to intentionally design for relationships.

Design thinkers like Donald Norman and Dieter Rams proposed that the major concern of design was a product’s function. For design to be functional, it must allow the user to accomplish the goal the device is purposed for. For example, a toaster that does not toast bread is more of a novelty than an effective kitchen tool. Function, efficiency, and utility were, and continue to be, some of the most formidable design characteristics.

In the late 2000s, another mantra took the design world by storm. In his book Designing for Emotion, Aarron Walter advocates for emotional design, arguing that judging something simply on its functional utility is a flawed baseline. It’s like being a chef and feeling the job is done when the food is edible. As design professionals, we should hold ourselves to a higher standard than “does it work?” Digital experiences should not simply be functional, but pleasurable. They should evoke emotional engagement and support patterns that trigger positive emotional reactions. The designer does this by ensuring that the product is functional, reliable, and usable. More than that, designers should strive for delightful experiences.

The problem with emotional design is that more and more people are not just connected with systems, but systems are connecting users with other people. User Experience designers are not just creating software for Human-Computer Interaction (HCI), but Human-Human Interaction. We’re dreaming up digital ways for people to enhance their personal relationships. For better or worse, we live in a world where the physical spaces we inhabit are now digital spaces as well.

The problem is that people are multi-faceted and come with all sorts of relational baggage. Here’s an example. In 2003 I got my first mobile phone. My grandmother, who was a beautiful woman, would leave scathing voicemails for me when I didn’t answer my phone while at work. She figured I was obligated to answer my phone if it rang, and since I had it with me all the time, I should answer it at any time. We experienced relational disruption because of a change in the digital landscape: phones were no longer tethered to physical locales.

That was more than a decade ago. Now I live in an age where connections are everywhere. If someone wants to get in contact, they can message me on Facebook and that message is sent to my computer, email, phone, and tablet all at once. I can’t imagine how this access would have affected my relationship with my grandmother.

We are at a point where we have to re-evaluate our design philosophy. It’s no longer enough to design for emotion. We must design for relationships. That is to say, we must design experiences that help people relate well with one another.

Attempts to broach the topic of Relationship Design have often come under the guises of the term “social.” Conversations about social design, however, can be superficial at best. They center less around human relationships and more around connection, as if creating a pipeline from point A to point B is the same as creating meaningful human experiences. As designers we can do so much more to enhance relationships. We must elevate our craft from providing ways to connect, to facilitating healthy and meaningful relationships between people.

How do we design for relationships? How does the way in which we interact digitally create positive human relationships, like friendships, partnerships, family, co-workers, and even marriages? How does this world of ubiquitous connection make us more human, not less? Pulling from various fields, including social psychology, marriage and family counseling, and user experience, there are several areas we can focus on when designing. While I cannot attempt to establish patterns for each area in this blog post, I will acknowledge them and follow-up with each as a separate blog post in the future.

Create Healthy Boundaries

Major relationship deficiencies begin when the lines between one person and another become blurred. The clinical term for this is enmeshment. In co-dependent relationships, a person feels like they are losing themselves to the needs and desires of another person. In the digital space, we aid enmeshment by degrading a person’s ability to create personal space and clarify boundaries for themselves. And it is only getting worse as we become more connected.

Make Connection Management Easier

Let’s be honest: the beeps are killing us. We are pushers of interruptions, many of which are unnecessary and unhelpful. In addition, we have relationships flying at us from 20 different angles: email, Facebook, Twitter, Instagram, Kik, Skype, and Snapchat, to name just a few. The emotional energy we spend managing connections impedes our ability to invest in relationships. Relationship Design means optimizing connection management.

Support Positive Connection

We’ve all been there: some ignorant thing gets posted on the Internet and people get angry. Relationship Design forces us to focus not simply on viral content, but on safe and meaningful relationships. Most arguments on the web are rooted in polarities. Relationship Design focuses on building opportunities for healthy dialogue and disagreement in the hope of bringing people toward one another rather than reinforcing their differences.

Looking Forward

There are several other areas to focus on in Relationship Design, including the development of singular self-integration, creating opportunities for shared memory and reducing cognitive dissonance in relationships. In this series I’ll be focusing on digital design patterns that help build stronger relationships for each area of focus above. In the words of Dr. Ian Malcolm from Jurassic Park, “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” We’ve come to a watershed moment in digital design. We have to stop and ask ourselves, not only if we can build social experiences for our users, but what we should build to support excellent relationships. I look forward to sharing with you the ways we’re building relationships here at Concentric Sky.

Nate Otto // February 3, 2015

Open Badges and Micro-Credentials Technical Roadmap

Nevada Lane, @NevadaSF. This post adapted from a technical session recap, 30 January 2015 in Redwood City, CA

Open Badges and Micro-Credentials Technical Roadmap image

Last week, representing Concentric Sky and the Oregon Badge Alliance, I was an invited participant at the Educator & Workforce Micro-Credentials Summit, put on by Digital Promise with the support of the MacArthur Foundation and the Carnegie Corporation. Thanks to Digital Promise and MacArthur for extending an invitation and bringing the Oregon Badge Alliance’s perspective to Redwood City. Concentric Sky is working to define a new endorsement extension to the Open Badges specification in order to elevate the best badges and issuers within their communities of earners and consumers.

I was specifically requested to participate in a session on “The Credentials Roadmap: the Technical Side of Micro-Credentials.” In this portion of the summit, we addressed questions around micro-credentials from a technical perspective, but the questions we considered were echoed in every other session I attended, and in many of the informal conversations in small groups and around tables the rest of the day.

We talked about interoperability, about adopting a multi-stakeholder perspective, and about the importance of a long-term view. But there was one question that rightfully occupied most of our time in talking about the technical roadmap, and it was the most frequent topic of the entire summit.

 Open Badges and Micro-Credentials Technical Roadmap image

The big question that will be before us for years is how the value of microcredentials will be determined. (I started talking about this last week, by beginning to investigate the concept of “currency.”) David Blake from Degreed pointed out that when we talk about interoperability today, we are talking on the level of technical compatibility, not on the level of value. While the technical validation of badges and the adoption of the common data specification is absolutely necessary to interoperability, these components alone do not ensure that micro-credentials issued by one organization can be easily valued within a different organization’s context.

The Open Badges specification creates a distributed infrastructure with a low barrier to entry for new issuers, because there are no central gatekeepers whose authorization must be gained in order to participate. With the thousands of issuers that already exist and the potential millions that may join them in coming years, it is a major challenge to compare different micro-credentials. While issuers, earners, and consumers all have a role in determining how badges are valued, “currency” is measured from the value system, context or place within a network of trust occupied by a consumer. The question a consumer might ask is “how does this credential fit into standards I respect, and why should I trust that it lives up to the promise of that alignment?” How can we guide that consumer to an answer without requiring hours or days of effort researching each new credential and its issuing organization? Erin Knight described this issue as the “most pressing” question in her 2012 paper on badge validation, calling the technical validation measures provided by the Open Badges specification a “baseline” from which to start addressing the more important questions about value.

This isn’t a problem exclusive to digital or micro-credentials, though it may be a more present problem in our minds because of the diversity of micro-credentials that an open standard allows. Many people are familiar with treating even college degrees as a sort of “black box.” Today, college degrees are well respected as the gold standard of educational credentials, but are impossible for employers to translate into specific skills, understanding or mindsets conferred to their earners. These existing credentials rely on large institutional gatekeepers, and the network of trust they create excludes a diversity of voices and organizations whose learning programs and credentials present value that has not been created within the traditional system.

Within the Open Badges specification community, we often consider badges as visible declarations of trust, and we are working on an endorsement specification to allow recognizers and 3rd parties to add their voices about which micro-credentials speak to their own value systems or those of consumers who trust them. Endorsements aim to help guide earners toward credentials of value and help consumers expand their scope of possible credentials they can recognize, so they can turn those credentials into opportunities granted to earners.

It will be a heavy lift, and there’s no easy path to being able to understand large swaths of the open micro-credentials landscape. But endorsements present an opportunity to define new networks of trust, open to broad participation that can begin to show consumers which badges are trustworthy.

There is the chance that in the face of this hard problem we will recreate existing value systems reliant on large established gatekeepers, because we are unable to translate the value provided by new players into our local contexts. But there is also a chance to build up an emergent ecosystem of understanding micro-credentials issued by a diverse range of providers, layering trust relationships and endorsements. With open technology and cooperative services like BadgeRank.org, we may build up visible records of our trust relationships, and then we might see where many of the micro-credentials created by diverse issuers are situated, each from our own perspectives within a network of trust.

I am taking the lead for Concentric Sky on defining endorsement as an OBI 1.1 extension.

Nate Otto // January 26, 2015

3 Trust Principles for Building Open Badges Software

Badges connection

Open Badges are a technology that promise to serve as portable digital credentials. Each badge symbolizes particular achievements a badge issuer recognizes about a recipient. The goal is that as a “shared language for data about achievements,” Open Badges and the accomplishments they represent can be understood by employers, colleges and other consumers of credentials.

It is badge consumers who are the arbiters of which badges are valuable. In 2015, software that uses Open Badges needs to focus more on helping badge consumers decide which badges make trustworthy claims.

As a developer working with Open Badges, I see a need for badge software to fill this value gap by ensuring that badge consumers can understand what information is being presented in a badge and how it applies to their context. An employer may see that a job applicant has earned a badge for experience with the Python programming language, but there is currently little way for this type of badge consumer to quickly understand how applicable that experience is to the job description she’s hiring for, or to see whether the badge is trusted by others in her network. Without making the badge understandable from within consumers’ contexts, badges have no “currency.”

3 Trust Principles for Building Open Badges Software image

(CC-BY epsos.de)

Currency, as a quality of money, corresponds to whether an artifact is generally accepted. Among credentials in the US, we could say bachelor’s degrees have currency; they are often listed as a top-line requirement for a wide range of positions, and postsecondary credentials are estimated to become even more important. A Georgetown University study last year predicted that postsecondary education would be a requirement for at least 63% of job openings by 2018.

In the Open Badges community, “currency” has long been a goal. The title of the ongoing MOOC for Open Badges on Blackboard’s Coursesites platform is “Badges: New Currency for Professional Credentials,” and among the working groups of the Badge Alliance, building understanding of badges among employers and other credential consumers has been a key focus.

In the fall, I participated in a roundtable webinar on badges hosted by the collaborative site Working Examples, where we referred to currency as the “holy grail” sought after by badge program designers. I followed up for a more targeted discussion with Krystal Meisel, who worked last summer with the city of Los Angeles on their City of Learning program. We distilled several factors that form both the barriers to how badges could gain currency and the opportunity points that our community, and specifically the developers at CSky, can build software around.

As I wrote for the DPD Project, issuers often try to convey the value they think badges will carry to their potential earner population, only to be met with incredulity or unease. Students are rightly skeptical of educators’ or techies’ claims that a particular credential will open up unspecified but valuable opportunities, and potential badge consumers are unwilling to promise valuable opportunities to earners of unfamiliar badges before seeing what real-world earners of those badges can do. It’s a catch-22 that undermines alternative credentials’ ability to gain currency.

UK research organization Jisc summarized the challenge based on an interview with the Badge Alliance’s Carla Casilli: “It’s clear that for badges to have currency, people need to be confident in their value.” Casilli elaborated on her own blog that badge currency arises from trust networks, and if they are to gain currency, badges “must not only engender trust, but actively work to build it.” She sketched out some features and practices of open badges systems that together build trust.

3 Trust Principles for Building Open Badges Software image

Currency Comes from Trust

A consumer’s ability to trust the claims made by a badge starts with verification of its recipient and authentication of its validity. Over time, consumers can consider the reliability of a particular issuer for recognizing earners of a certain quality and can take into account the accreditation or endorsement of external organizations. These factors all add up to trust in the badge as a credible claim about the earner. But Casilli hints at the ephemerality of trust in a credential, saying that “Trust is a delicate alchemical reaction based on complex and varying degrees of components, environment, perceptions, etc.”

The goal of open badges supporters isn’t to create an ecosystem of credentials that are trusted tenuously and ephemerally; it has long been argued that open badges have the potential to serve as currency. To build currency with badges, consumers need to know when they can trust a badge’s claims, and potential earners need to know whether the badges they have a chance to earn will be trusted by the employers, colleges, or partners to whom they hope to present them.

3 Trust Principles for Building Badge Software

Open Badges have the potential to unlock value for their earners, in terms of new jobs, collaborations, and opportunities. Here are three tips for software developers looking to turn this potential into cold hard currency.

1. Recognize that consumers and earners may be unfamiliar with Open Badges

Badge issuing programs may provide valuable experiences and have rock-solid assessments, but if the consumers of their badges don’t know how to access the information in badges’ metadata, there is no way for them to decide whether the program is trustworthy. Software for earners needs to help them show their badges in a wide variety of circumstances, often to consumers who may never have seen an Open Badge before. This places a lot of responsibility on badge recipients not only to explain their own accomplishments, but also to explain in high-pressure job application processes what Open Badges themselves are and how to interpret them.

This barrier to developing trust in the badges can be alleviated by embedding information about the features of Open Badges where badges are displayed. Make it clear that an issuer recognized an earner for a specific accomplishment, and plainly display the links to criteria, evidence, and the issuer. A badge earner’s accomplishments are relevant in many different contexts and conversations, and badge displays should be tailored to the needs of those contexts. For example, a resume is the expected format for discussing credentials in the hiring process. Developers who wish to target job applications as a medium of badge sharing may seek to let earners easily embed badges into their resumes.

2. Consumers must know why they can trust an Open Badge is valid

Issuers, earners, and consumers of Open Badges all have an interest in knowing that a badge presented by its recipient is valid. And when earners show off their badges to consumers who may never have seen badges before, they need to put the ability to perform validation at those consumers’ fingertips. Software developers who write applications representing earners’ interests need to make it easy for earners to put their badges and auditable proof of those badges’ validity in front of consumers. Closely linking software that allows earners to share their accomplishments with software that allows consumers to validate them helps reduce the friction and increase trust.

Make it clear what types of validation an application performs on the badges it displays. A valid badge is one that was truly issued by the expected issuer to the expected recipient, and whose assertion has not expired or been revoked.
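The recipient check, at least, can be automated. As a rough illustration, here is a minimal Python sketch of matching an Open Badges 1.x assertion’s recipient field, which may be a plain email address or a salted hash of one; the function name is my own, not from any particular badge library.

```python
import hashlib

def recipient_matches(assertion, email):
    """Check whether a badge assertion's (possibly hashed) recipient
    identity matches a known email address, per Open Badges 1.x rules."""
    recipient = assertion["recipient"]
    identity = recipient["identity"]
    if not recipient.get("hashed", False):
        # Plain identities are compared directly (case-insensitively).
        return identity.lower() == email.lower()
    # Hashed identities look like "sha256$<hexdigest>", where the
    # digest is hash(email + salt).
    algorithm, _, digest = identity.partition("$")
    h = hashlib.new(algorithm)
    h.update((email + recipient.get("salt", "")).encode("utf-8"))
    return h.hexdigest() == digest
```

A consumer-facing validator would run this alongside expiration and revocation checks before declaring a badge valid.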

3. Leverage cooperation to make trust networks visible

The Badge Alliance is in the process of finalizing a specification for “endorsement” of Open Badges and Issuers. Just like the badges given to earners, endorsement badges are shareable declarations of trust. One of the most important questions to answer about whether a badge should be trusted is who else trusts it, and the endorsement specification will make it possible to begin answering this question. BadgeRank.org, a project by Concentric Sky, will utilize public endorsement data as it emerges to serve as a repository for information about the community’s trust in various badges and issuers.

The Open Badges community is cooperative and proactive in defining methods of cooperation. Where it is a heavy lift for one developer or company to build currency for badges, cooperating with a community to establish trust can distribute the load. With our own proposal to the DML Trust Competition, we introduce a plan for building software that embodies these three principles, and we are happy to see that other initiatives, like Badge Europe’s “Open Badge Passport” and the Open Badge Exchange project out of Dartmouth College, are also focused on questions of badge currency through building trust.

The Open Badges community will make great progress in 2015 building better software for issuing badges and for earners to manage and organize them. But for those badges to have currency, badge consumers need to have software that represents their interests and helps them decide which badges to trust.

Kurt Mueller // January 22, 2015

Creating Apps for Apple Watch

apod_watch.png image

After years of rumors and hype, Apple announced the Apple Watch to much fanfare in September 2014. Though it will not be available for purchase until sometime in the first quarter of 2015, Apple developers can start working on apps for the Watch now, with Xcode 6.2 Beta available from developer.apple.com. I wanted to learn more about developing for Watch and about Swift, Apple’s new programming language, so I wrote a Watch app in Swift to accompany APOD, a popular iOS app we created here at Concentric Sky. (In related news, we launched APOD for Android this week.) Here’s a brief explanation of how I created the Watch app for the APOD iOS app.

Apple Watch Development Basics

First, let’s talk about current development options and restrictions for Apple Watch. As of today, all third-party Watch apps must have a corresponding iPhone app that handles most of the heavy programmatic lifting. You can’t write a Watch app that runs entirely on the Watch hardware, without a communicating iPhone app, though it’s likely that this restriction will be eased over time as the platform matures and developer tools are improved. For now, only Apple can make Watch apps that run without an iPhone and corresponding phone app. 

As explained in more detail by Apple here, Watch apps support three types of interfaces: full-app interactions, glances, and notifications. A full-app interface is required, while glances and notifications are optional. In this article I will discuss creating a full (but simple) interface, and I will address glances and notifications in subsequent articles.

A Watch App for APOD

APOD displays astronomy pictures from the APOD repository, along with titles and descriptions. The iOS APOD app has a gallery view (implemented as a UICollectionView) and a single-image view. Given the small size of the Watch screen, it made the most sense to create a single-image view first, before trying to show multiple images. However, I decided to display the single image as a table with one cell, to facilitate showing multiple images at some later point. I wanted an image and a title label to fill up the entire watch display:

Creating Apps for Apple Watch image

Create Watch Targets

Watch apps are implemented as App Extensions to iOS apps, using the new WatchKit framework available in Xcode 6.2 beta. The first step in creating a Watch app is to make WatchKit Extension and WatchKit App targets in your iOS app, using File / New / Target and selecting the Apple Watch template:

Creating Apps for Apple Watch image

In the options window, I choose Swift as the language for my new target, and I check the boxes for “Include Notification Scene” and “Include Glance Scene.” Checking these boxes causes Xcode to create stub Controller classes for notifications and glances, and add scenes for these to the Watch storyboard it creates.

Creating Apps for Apple Watch image

Now in Project Navigator I see the new targets:

Creating Apps for Apple Watch image

The WatchKit Extension is for the code that runs on the iPhone to support the Watch app, and the WatchKit App target has the storyboard and image assets file for the Watch. You can see that there are no .swift files in the WatchKit App target, which makes sense given that it is not possible for third-party developers to create code that runs directly on Watch at this point. We can define user interfaces for Watch, but the code that controls those interfaces runs on the phone. The full app is controlled by InterfaceController.swift. The NotificationController and GlanceController will be tackled later.

Configuring the Storyboard

Looking at the WatchKit App’s Interface.storyboard, there are four scenes, but I am only concerned with the Interface Controller Scene:

Creating Apps for Apple Watch image

I would like to display a single image with a two-line label under it for the image’s title, and to make the future goal of displaying multiple images easier, I will make a table with a single row. The WatchKit table class is called WKInterfaceTable. There’s a Table object in the Objects library in Interface Builder:

Creating Apps for Apple Watch image

Dragging a Table to the Interface Controller Scene results in:

Creating Apps for Apple Watch image

Within the new Table, there’s a Table Row Controller. This is conceptually similar to a prototype cell in a UITableView or UICollectionView. The Table Row Controller is backed by a custom row controller class that has outlets for each of the UI objects within the table row that you wish to update when displaying the table. In this case, I want an image and a label, with the image above the label. You can see that the table row has a Group item, which is a WKInterfaceGroup. This group will contain the image and the label and determine how they are displayed. To keep things simple, Watch layouts don’t use constraints like iOS storyboards. Instead, a group can either have a horizontal or vertical layout, much like Android’s LinearLayout, and it will display contained items from left to right (for horizontal layouts) or top to bottom (for vertical layouts). I want a vertical layout with the image appearing above the title, so I adjust the group’s Attributes:

Creating Apps for Apple Watch image

I’ve given the group a vertical layout, set Custom Insets to 0 so that the image and label will be flush up against the edges of the display, and set the Size Height to be Relative to Container, with a value of 1. This makes the group take up the entire vertical space of the container, which is its table row.

Next I add an Image object from the Objects library, inside the group:

Creating Apps for Apple Watch image

For the image Size, I set the Width and Height to Relative to Container, with the Width filling the container (value of 1) and the Height taking up 75% of the container height (value of 0.75). This leaves enough room under the image for a two-line label:

Creating Apps for Apple Watch image

The last step in designing the UI is to set the image and label to sensible defaults to indicate that an image is loading, for display before I set the actual APOD image and title in code. I do this by adjusting the Image attribute of the image and the Text attribute of the label (I first add a default image to my WatchKit App’s Images.xcassets file):

Creating Apps for Apple Watch image

Adding a Custom Row Controller

Next I create a Table Row Controller class to provide IBOutlets so I can set the row’s image and label text at runtime. I create a new Swift file called APODRowController.swift:

Creating Apps for Apple Watch image

This class extends NSObject, and has IBOutlets for the image and label defined in the storyboard, above. It also has an apodKey variable to keep track of which APOD is displayed by the row, and a configureCell() method to pass in the key, title text, and image and set the image and title text in the displayed row.

Now that I have a custom class to back the table row, I must tell the storyboard about the custom class and make the IBOutlet connections from the class to the image and label. These are the Identity, Attributes, and Connections inspectors for the Table Row Controller after I update it:

Creating Apps for Apple Watch image

In the Identity inspector, I set the Custom Class to my newly-created custom class, APODRowController. In the Attributes inspector, I change the name of the row controller identifier to “default,” which will be used later when I configure the row in the interface controller. And in the Connections inspector, you can see the Outlet connections I made from the row controller IBOutlets to the image and label in the storyboard.

Bringing it all Together with InterfaceController

Finally I am ready to flesh out the boilerplate InterfaceController.swift class. If this were a regular iOS UITableView controller, I would implement various methods in the UITableViewControllerDatasource and UITableViewControllerDelegate protocols to configure the number of sections and rows in the table, create each row of the table, etc. However, WatchKit tables are much simpler, and because I am only displaying a single row in my table, simpler yet. The number of rows for a Watch table must be set and each row needs to be configured up front when the table is loaded. If my table had multiple rows I would iterate through them, configuring each one, but since I have just one row I don’t have to loop at all. Here’s the entire class:

Creating Apps for Apple Watch image

The class contains an IBOutlet for the WKInterfaceTable I created in the Watch app storyboard, which I connected in the storyboard from the Interface Controller to the table. It also has a reference to an APODService class, defined elsewhere and beyond the scope of this blog post, that performs asynchronous loading of today’s APOD. The APODService has a single public function that takes a completion handler:

Creating Apps for Apple Watch image

I am only displaying a single row, backed by an APODRowController object, and I keep a reference to that row called todayCell to enable configuration of it after the asynchronous loading of today’s APOD is complete.

I override the WKInterfaceController superclass function awakeWithContext() to call loadTable(). loadTable() first tells the table that it will have a single row, and that row is of type “default” (recall that I defined my row controller Identifier attribute to be “default” in the storyboard). Then I ask the tableView to give me an APODRowController object and assign it to my todayCell variable. Next, I call the APODService function to load today’s APOD and pass in a completion handler block that configures the row with the resulting key, title, and image. And finally, another call to the tableView’s setNumberOfRows function causes the tableView to redraw, displaying the updated row.

The first time you run a Watch app in the simulator, you must wait for the simulator to launch and then go to Hardware / External Displays and select one of the two Apple Watch displays (38mm or 42mm). Then you will see the Watch simulator appear. This is what I see for my simple APOD display:

Creating Apps for Apple Watch image

Next Steps

This is a very simple example, and only scratches the surface of Watch interfaces and interactivity. Maybe I want to view previous APOD images, or get notifications on my watch when a new APOD image is available. Perhaps I want to share my favorite APOD images with friends through social media or messaging. Wouldn’t it be nice if the APOD watch app knew what I last viewed in the iOS app and could automatically show it to me on the watch? Maybe I want to look through images on the watch and then fling one to my phone to check it out on the bigger display. The possibilities for novel and useful interactions between watch and phone are endless.

We are very excited here at Concentric Sky about wearables and we can’t wait to get our hands on actual Apple Watch hardware in the next couple of months. In the meantime, we are busy exploring the developer tools and adding support to our apps in anticipation of the big launch. Check back for more Apple Watch news, as this is sure to be a hot topic.

Code from this post:

// APODRowController.swift
// Created by Kurt Mueller on 1/18/15.
// Copyright (c) 2015 Concentric Sky, Inc. All rights reserved.
import Foundation
import WatchKit
class APODRowController : NSObject {
  @IBOutlet weak var apodImage : WKInterfaceImage!
  @IBOutlet weak var titleLabel : WKInterfaceLabel!
  var apodKey: String?

  func configureCell(key : String, title: String, image: UIImage) {
    apodKey = key
    titleLabel.setText(title)
    apodImage.setImage(image)
  }
}

// InterfaceController.swift
// APOD WatchKit Extension
// Created by Kurt Mueller on 1/18/15.
// Copyright (c) 2015 Concentric Sky, Inc. All rights reserved.
import WatchKit
import Foundation
class InterfaceController: WKInterfaceController {

  @IBOutlet weak var tableView : WKInterfaceTable!

  var apodService = APODService()
  var todayCell : APODRowController! = nil

  override func awakeWithContext(context: AnyObject?) {
    super.awakeWithContext(context)
    loadTable()
  }

  private func loadTable() -> Void {
    tableView.setNumberOfRows(1, withRowType: "default")
    todayCell = tableView.rowControllerAtIndex(0) as APODRowController

    apodService.currentApodInfo { (failed, title, image) in
      self.todayCell.configureCell(self.apodService.currentApodKey(), title: title!, image: image!)
      self.tableView.setNumberOfRows(1, withRowType: "default")
    }
  }
}

Nate Otto // January 14, 2015

Partnering with Oregon Center for Digital Learning (OCDL) on the Trust Ecosystem Project

ocdl.jpg image

At Concentric Sky, we are proud to serve as the technology partner for the Oregon Center for Digital Learning (OCDL). OCDL is a new non-profit organization founded to support the use of digital badges and other collaborative education technology for learning in Oregon. Together with OCDL, we have applied for a grant through HASTAC & MacArthur’s Digital Media and Learning (DML) Competition (dmlcompetition.net), which is focused this year on trust in Connected Learning environments.

As a technology to support education, Open Badges have tremendous potential to connect learning across different contexts and to build connections between widespread educational organizations in our communities. As a founding member of the Oregon Badge Alliance, Concentric Sky hopes to further develop the technology that students and the programs they participate in need to bring learning experiences closer together and promote trust.

Along with our partners in the Oregon Badge Alliance, we plan to help jumpstart and support a cross-section of collaborative pilot programs issuing Open Badges. Twelve such programs are currently under way, including partners among out-of-school learning organizations, workforce development nonprofits, and higher education institutions. We will also be building a framework for cooperation, through the Oregon Badge Alliance, supporting not only these programs that wish to issue badges, but also the learners who earn them and the representatives of employers, educators, and potential collaborators who want to understand them.

Our proposal for the DML Competition is now open for public voting through January 20. If you support our efforts to create a Trust Ecosystem around Open Badges in Oregon, we ask that you visit the DML Competition site and vote for our proposal. The people’s choice component of the competition could help us win one of three $5000 technology grants that would further support our program.

The Trust Ecosystem Project

The Trust Ecosystem Project will work with 12 pilot badge programs, employers, and Oregon Badge Alliance partners in workforce development, government, K12 and higher education to build software and a framework for connecting learning experiences with Open Badges. The project aims to close the loop between badge issuers, earners and consumers by building software that represents the interests of each stakeholder group. Each application will be released open source as well as hosted for public use. Beyond software, the Trust Ecosystem Project will organize a youth advisory council and will bootstrap a trust network around badges with pilot programs and badge-consumer partners in Oregon, yielding a variety of case studies and potentially exportable implementation models.

Samantha Kalita // January 8, 2015

5 Steps to Know Your Target Users

Couple using devices

Use the 5 W’s to Create an Excellent User Experience recommended applying the writing mnemonic (Who, What, When, Where, and Why) to guide user experience design decisions. Let’s take a deeper look at these tools through a five-part series. We’ll start with one of the most important considerations in design: “Who is your target?”

Why it’s important

This is a diverse world. People vary in many ways: language, culture, education, beliefs, income, etc. It is impossible to create the perfect user experience for everyone. Don’t try to, and don’t worry. Not everyone will be interested in your product or service; instead, focus on your high-value users and consumers. Specialize and optimize for their needs, limitations, and expectations.

Getting started

Follow these five steps to answer any of the 5 W’s:

  1. Review - Understand your product
  2. Research - Evaluate present state
  3. Strategy - Prioritize efforts
  4. Data - Validate with information
  5. Optimize - Iterate on solutions

Who is your target? Let’s start at the beginning…


Understand your product

Know your product or service inside and out. Be able to articulate it in one sentence. Know its purpose, benefits and weaknesses. Focus on your core objectives.


Evaluate present state

Analyze the competition.

Who is their target? How are they performing for that target? Have they missed any opportunities? Are there any consumers being underserved? Does that underserved market match your target?

Know your space.

How is your target being addressed in other markets? What works well? What doesn’t work well?

Stay current.

Are there any emerging groups or consumers who would benefit from your product or service?


Prioritize efforts

By this point, you should have a good understanding of who your target user is. Now you need to share what you’ve learned with your team and/or client. Often you will have several different types of users. Create user personas for each of them. A user persona is a profile of a fictional person who matches your target. Personas help you visualize and get into the mindset of your consumer. Give your persona a name, age, family, education, hobbies, language, ethnicity, etc. Find a headshot that fits your persona. Add any images or content that will help your team envision each persona as a real user. Make sure to include how they will use your product/service. Identify their expectations and pain points.

5 Steps to Know Your Target Users image

Once you’ve identified your key users, prioritize them. You will design for your primary users while checking that your solutions work for your secondary and tertiary users.


Validate with information

Up to this point you’ve made educated guesses about your target. Now confirm that you’re on the right track by talking with them. Conduct focus groups, run online surveys, join discussion boards, etc. Choose whatever method works best for you. Make sure that you’re gathering first-hand information from your target. Gather quantitative (e.g. “X% of participants preferred Option A”) and qualitative data (e.g. “I like Option A because I can use it while I’m on the go.”)


Iterate on solutions

Repeat the process and re-evaluate with the data you collected. Continue to iterate over the life of your product and service. New competitors will emerge, and new niche markets of potential consumers may appear.

Although it may sound simple, the most important thing you can do is to keep your eyes and ears open to your users. Be proactive. Be aware. Listen to your users. The better you understand your users the better your product/service will be.

What process have you used to identify your target?

Nate Otto // October 21, 2014

Introduction to Open Badges


Hello! Allow me to introduce myself - I’m a new face on this blog, and a new developer at Concentric Sky working on our web applications that deal in Open Badges. For the last year, I have been coordinating a team at Indiana University studying 30 projects that designed and implemented programs to issue open digital badges for learning. The findings from that project are being published now. We found that overall program success often came to the programs that had the best understanding of how all the moving parts of their design fit together, not necessarily those with the most ambitious plans or the best technology.

I’m proud to be joining Concentric Sky, because the team here really understands the potential of Open Badges. With our years of combined experience in EdTech, we’re in an excellent position to help organizations build programs that issue badges - and we can provide much of the software that will help each of their participants earn badges, manage their credentials, and most importantly, use them to unlock future opportunities.

What are Open Badges?

Open Badges are digital images that symbolize particular achievements, benchmarks, or experience. Unlike many of the digital badge systems that have sprung up in videogames and online, Open Badges are a shared language for data about these achievements. They are designed to break down the barriers between different systems that understand only their own sets of familiar credentials.

Open Badges directly embed data about the achievement they represent inside the image. This data stays with the image as it is moved and shared. Using this technology is a way for badge earners to bring together verifiable representation of qualifications, skills, and experience to tell a unified story about their accomplishments, no matter whether those badges were issued by a single education provider or by a wide range of issuers.
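For PNG badges, this embedding (“baking,” in community parlance) stores the assertion data in an iTXt chunk keyed “openbadges” inside the image file. As a rough sketch of how that data travels with the image, the following Python function (the name is my own, not from a badge library) walks a PNG’s chunks and pulls the embedded text back out:

```python
import struct

def extract_baked_assertion(png_bytes):
    """Scan a PNG's chunks for the iTXt chunk keyed 'openbadges' that
    badge baking uses to embed assertion data in the image.
    Returns the embedded text (assertion JSON or its URL), or None."""
    if png_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    pos = 8
    while pos + 8 <= len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"iTXt":
            keyword, _, rest = data.partition(b"\x00")
            if keyword == b"openbadges":
                rest = rest[2:]                       # compression flag + method
                _, _, rest = rest.partition(b"\x00")  # language tag
                _, _, text = rest.partition(b"\x00")  # translated keyword
                return text.decode("utf-8")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4-byte length + 4-byte type + data + 4-byte CRC
    return None
```

Because the data rides inside the image itself, the badge remains verifiable wherever the file is copied or shared.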

The metadata standard was originally designed by a team at the Mozilla Foundation, and now many organizations, including Concentric Sky, are contributing to advancing the standard and growing the ecosystem of organizations and people who can act as badge issuers, earners, and consumers. Using this common standard for embedding metadata about achievements into badges helps consumers understand what badged accomplishments mean, and also enables automatic verification of authenticity. This means admissions offices, hiring managers, and others who examine credentials can shift their attention from calling phone number after phone number to verify qualifications, to determining whether or not those qualifications help represent someone who is a good fit for their mission and goals.
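For badges using “hosted” verification, that automatic check amounts to re-fetching the assertion from the issuer’s server and confirming it still matches the copy the earner presented. Here is a minimal Python sketch under that assumption; the function names and the injectable fetch parameter are my own illustration, not a standard badge API.

```python
import json
from urllib.request import urlopen

def fetch_json(url):
    """Fetch and parse JSON from a URL (hits the network)."""
    with urlopen(url) as response:
        return json.load(response)

def verify_hosted(assertion, fetch=fetch_json):
    """Sketch of 'hosted' verification for an Open Badges 1.x assertion:
    re-fetch the assertion from the issuer's server and confirm it still
    matches the earner's copy. A missing or changed assertion (e.g. a
    revoked badge removed from its URL) fails the check."""
    verify = assertion.get("verify", {})
    if verify.get("type") != "hosted":
        raise ValueError("this sketch only covers hosted verification")
    try:
        hosted = fetch(verify["url"])
    except Exception:
        return False  # unreachable or revoked (e.g. 410 Gone)
    return hosted == assertion
```

The fetch function is injectable so the comparison logic can be exercised without network access; a real consumer tool would also check expiration and the recipient identity.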

The Potential of an Open Badge Ecosystem

The well-recognized credentials of today’s education system, from the high school diploma to the PhD, are familiar to the public and are at home in resumes and applications for all sorts of positions. But a wide range of learning providers operating outside the accredited education environment offer youth and adults learning experiences that represent important components of people’s educational journeys. These providers often award their own paper certificates for the accomplishments they measure, but the public has little to no familiarity with these credentials, so they are often not represented as prominently as the traditional components of an individual’s experience. Open Badges represent an opportunity for organizations both inside and outside the formal education sector to contribute richer information about badge earners’ experiences, in a way that helps earners better represent themselves in conversations about their qualifications.

We believe all stakeholders in the education ecosystem could be better served by providing and accessing more detailed information about achievements, especially as the need to connect learning across different environments, formal and informal, from a young age through learners’ careers, increases.

Concentric Sky is incubating multiple projects to serve all sides of the Open Badges ecosystem. From making enterprise-level issuing tools available to even the smallest learning program, to the mobile Badgr badge repository available for iOS and Android, to the BadgeRank website that aims to begin crowdsourcing information about the value of badges, the idea is to make it possible for many issuers and earners to better tell their own stories where it counts, and for their audiences to understand them.

We’re excited to participate in growing the ecosystem and helping learners access and receive the benefits of participating in a wide variety of learning experiences.

Samantha Kalita // September 23, 2014

Use the 5 W’s to Create an Excellent User Experience

Woman using device

The 5 W’s—the fundamental writing mnemonic we learned in grade school—can help us clearly communicate a story to our audience. They remind us to tell the key points of a story: Who, What, When, Where, and Why. This mnemonic method can also be used as a tool to guide successful user experience design.


Who is your target?

One of the most important considerations in any design is to know who you are designing for. Think about that user’s needs, limitations, and expectations for all aspects of the experience.

Consider what language you want to use.

For example, if you’re designing a learning tool for a student, make sure the vocabulary matches their reading level. Also consider what tone and voice you want to use. Do you want to be authoritative or chummy? If you have a culturally diverse customer base, it may be important to offer multilingual experiences.

Empathize with your user.

Do they have any disabilities (e.g., color blindness, poor hearing, poor vision, etc.)? What information and in what format (e.g., text, image, video, audio) do they want? Always try to make their lives easier.

Who does this well:


Their target is schools, which means they have optimized their designs for both students and educators.

Use the 5 W’s to Create an Excellent User Experience image


What is your goal?

This is the raison d’être, as the French say: the reason for existence. Keep it foremost in your thoughts and let it dictate every decision you make. After all, it is the whole point of why you’re creating this experience.

State your goals.

Make sure everyone is clear about those goals from the beginning. Know what your goals are before you start designing. When designing, constantly ask yourself, “Is this helping the user achieve their goal?” If the answer is no, consider excluding it from your design.

Measure your progress.

Don’t forget to establish metrics to evaluate how well you are doing. Create measurable and achievable goals. If your goal is to increase email subscriptions, state that you want to increase conversions by a particular percentage within a window of time.

Who does this well:

Silicon Shire

Their goal is to promote technology businesses in the Eugene-Springfield metropolitan area.

Use the 5 W’s to Create an Excellent User Experience image


When should you show CTAs?

Make calls to action (CTAs) relevant and easy. Provide users with the context and information they need to take action. Don’t make conversions a battle. Guide them through the actions they need to take.

Make it relevant.

Be context-sensitive. Both the content and the action should be aligned. For example, if you mention that they can reach out to customer service if they have additional questions, enable them to contact customer service directly.

Be strategic.

Often you’ll have several goals for users. Prioritize those goals and be selective about when you present them to users. Don’t list all the possible actions that a user can take at one time. Distribute them across your experience. Give the highest priority action the most visibility. Provide context and support for taking action. Make sure action language is clear and concise.

Who does this well:

Hatch Canada

Since their primary goal is to have parents sign their children up for after-school programming instruction, this appears as the dominant CTA above the fold. Their secondary goal is to have users contact the instructor. This appears below the primary CTA and in a more modest styling.

Use the 5 W’s to Create an Excellent User Experience image


What platform makes sense?

To answer this question, you need to have previously answered “Who” and “What.” It’s important to understand your users’ behavior as well as the benefits and limitations to the networks and platforms they’re on.

Know where your users are and aren’t.

You have this great social networking plan. It will be the next viral sensation. Everyone will be tweeting about it for months. Except your target isn’t on Twitter; they’re on LinkedIn. Make sure to focus your energy on platforms where you get the most bang for your buck.

Know how to best leverage platforms.

Not everything is designed to do everything. Don’t design a mobile application if all you really need is a mobile-optimized website. Pinterest is great for image sharing. Twitter is great for short thoughts. Understand each platform’s strengths and weaknesses and decide which best matches your goals.

Who does this well:


Since this service helps users track their favorite drinks wherever they are, they chose to create a mobile app, which is ideal for on-the-go access.

Use the 5 W’s to Create an Excellent User Experience image


Why should users act?

Users are savvy. It’s important to demonstrate how you will benefit their lives from both a logical and an emotional point of view. There are a lot of fish in the sea, so don’t get lost in the current. You might have the best service or product, but if you can’t communicate that to your user, you will lose them.

Address their problems.

Life is complicated. Make it better. Show them that you understand their problems and how you’ll solve them.

Show your value.

This can be done by illustrating cost savings, sharing third party testimonials, displaying comparison charts, etc. Whatever approach you take, make sure you demonstrate your worth.

Who does this well:

Mama Seeds

To establish their credibility as pregnancy experts, they identified well-known pregnancy resources that have leveraged their content.

Use the 5 W’s to Create an Excellent User Experience image

At its core an excellent user experience is achieved by understanding your users and making their lives easier.

What mnemonics have you used in your design process?

Kurt Mueller // June 4, 2014

Creating a Multipage PDF Document from UIViews in iOS

blog-image-1.jpg image

We created an educational children’s app for iPad that includes a photo scrapbook. Students earn stickers and animal photos for the scrapbook as they use the app, and they are given a few minutes to interact with the scrapbook at the end of each learning session. Since the scrapbook serves as both an indicator of student progress and a fun reward for student effort, we provide an in-app mechanism to export the scrapbook to PDF format so that students will have something tangible to take away from their time with the app (in addition, of course, to increased knowledge and understanding!). In this post, we’ll explore the steps necessary to take a set of UIViews, each representing an individual scrapbook page, and create a multipage PDF document that can be emailed or printed directly from iOS.

Scrapbook Overview

A typical scrapbook contains many pages, each displayed side by side with another to look like a physical book. Here’s an example of two facing pages with photos and stickers, as they appear to students in the app:

Creating a Multipage PDF Document from UIViews in iOS image

Each page is represented by a class called ScrapbookPage, and contains one or two photos and an arbitrary number of stickers (students are free to move stickers around in the scrapbook). We present two ScrapbookPages side by side using a UIPageViewController, which provides really nice page turning interactivity.

We want our PDF output to accurately represent the in-app scrapbook, so we will show two ScrapbookPages side by side on each page of the PDF output. This means that the PDF document needs to be in landscape orientation. We also need to accommodate the possibility of an odd number of ScrapbookPages, in which case the last page of our PDF output will display a single ScrapbookPage rather than two.

Generating PDF Data from a UIView Subclass

Before we tackle the problem of making a multipage document from many ScrapbookPages, let’s start with the more basic task of turning a single UIView subclass into PDF data. In a later section we’ll expand on the basic task by adding logic to loop through an array of ScrapbookPages and create facing page views.

In this basic example, we’ll create a UIView instance that takes up the full screen in landscape orientation, with size 1024x768:

UIView* testView = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 1024.0f, 768.0f)];

Next we create a mutable data object to hold our PDF-formatted output data:

NSMutableData* pdfData = [NSMutableData data];

Then we create a PDF-based graphics context, with our mutable data object as the target:

UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 792.0f, 612.0f), nil);

Note that 792x612 is the size in points of a standard 8.5x11” page in landscape orientation (PDF contexts use 72 points per inch). We are passing nil as the last parameter, which could instead be an NSDictionary with additional info for the generated PDF output, such as author name.

Then we mark the beginning of a new page in the PDF output and get the CGContextRef for our PDF drawing:

UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();

Remember that our UIView has size 1024x768, and our PDF page has size 792x612. To make sure that all of the UIView is visible in the PDF output, we must scale the context appropriately. 792 / 1024 ≈ 0.773, which is our scaling factor:

CGContextScaleCTM(pdfContext, 0.773f, 0.773f);
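As a quick sanity check on that scale factor, here is a standalone Python sketch (not part of the app code) working the same arithmetic:

```python
page_w, page_h = 792.0, 612.0    # US Letter landscape, 72 points per inch
view_w, view_h = 1024.0, 768.0   # full-screen landscape UIView

# Scale uniformly by the tighter constraint so nothing is clipped;
# here the width is the binding dimension (792/1024 < 612/768).
scale = min(page_w / view_w, page_h / view_h)
print(round(scale, 3))  # → 0.773
```

At that scale the 768pt-tall view renders 594pt tall, comfortably within the 612pt page height.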

Now that all setup is done, we finally get to the exciting part: rendering the UIView’s layer into the PDF context:

[testView.layer renderInContext:pdfContext];

To finish up, we end the PDF context:

UIGraphicsEndPDFContext();
At this point, we have an NSData object (pdfData) that contains a PDF representation of our UIView (testView). Here’s all the code from this example together:

UIView* testView = [[UIView alloc] initWithFrame:CGRectMake(0.0f, 0.0f, 1024.0f, 768.0f)];
NSMutableData* pdfData = [NSMutableData data];
UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 792.0f, 612.0f), nil);
UIGraphicsBeginPDFPage();
CGContextRef pdfContext = UIGraphicsGetCurrentContext();
CGContextScaleCTM(pdfContext, 0.773f, 0.773f);
[testView.layer renderInContext:pdfContext];
UIGraphicsEndPDFContext();

Creating a Single Full-screen UIView Subclass Instance from Two ScrapbookPages

In the previous section, we learned how to render a full-screen UIView into a PDF context. We omitted something important, however: the UIView was empty, with nothing in it. That will make for a pretty boring PDF. In this section we’ll see how to add two facing ScrapbookPages to a UIView subclass.

In the app, each ScrapbookPage displayed onscreen has a size of 475x577. Two of these fit side by side on a full-screen landscape page with space between them and a border around the outside, like so:

Creating a Multipage PDF Document from UIViews in iOS image

As mentioned previously, the scrapbook facing pages view in the app is controlled by a UIPageViewController. This provides a very polished and natural simulation of an actual book, with realistic page turning animations as you drag your finger over the pages. This is great for interactive use of the scrapbook, but it’s not necessary for rendering of static pages, and in fact would probably add a lot of CPU and memory overhead to the process. Instead of using a UIPageViewController for the PDF rendering process, we created a simple UIView subclass called ScrapbookOpposingPagesPrintingView. This class manages layout of two ScrapbookPages, on top of a background UIImageView representing an open book.

How do the individual ScrapbookPages get laid out? Before we proceed, we need to introduce another class: ScrapbookPagePrintingView. This UIView subclass takes in its init method a ScrapbookPage object, which is a simple NSObject subclass describing the photos and stickers on a page, and does the actual layout of the photos and stickers described in the ScrapbookPage object by creating UIImageViews for each photo and sticker. The ScrapbookPagePrintingView adds these UIImageViews as subviews to itself. We will not describe this class further, as its internal operations are unimportant to the present discussion.

Here’s what we’ll see in the code below: we have two ScrapbookPage objects, and we create ScrapbookPagePrintingView objects from each one. We add two of these ScrapbookPagePrintingViews to our ScrapbookOpposingPagesPrintingView, which is then ready for rendering to PDF. This diagram shows the relationship between ScrapbookPagePrintingViews and an enclosing ScrapbookOpposingPagesPrintingView:

Creating a Multipage PDF Document from UIViews in iOS image

This is ScrapbookOpposingPagesPrintingView’s interface (.h) file:

@class ScrapbookPage;

@interface ScrapbookOpposingPagesPrintingView : UIView
- (void)showLeftScrapbookPage:(ScrapbookPage*)leftPage rightScrapbookPage:(ScrapbookPage*)rightPage;
@end

And here is ScrapbookOpposingPagesPrintingView’s implementation (.m) file:

#import "ScrapbookOpposingPagesPrintingView.h"
#import "ScrapbookPagePrintingView.h"
#import "ScrapbookPage.h"

static const CGRect ScrapbookOpposingPagesLeftPageFrame = {36.0f, 92.0f, 475.0f, 577.0f};
static const CGRect ScrapbookOpposingPagesRightPageFrame = {514.0f, 92.0f, 475.0f, 577.0f};

@implementation ScrapbookOpposingPagesPrintingView

- (id)init {
    CGRect fullScreenFrame = CGRectMake(0.0f, 0.0f, 1024.0f, 768.0f);
    self = [super initWithFrame:fullScreenFrame];
    if (self) {
        self.backgroundColor = [UIColor whiteColor];
        UIImageView* backgroundImageView = [[UIImageView alloc] initWithFrame:fullScreenFrame];
        [backgroundImageView setBackgroundColor:[UIColor clearColor]];
        [backgroundImageView setContentMode:UIViewContentModeCenter];
        [backgroundImageView setImage:
          [UIImage imageNamed:@"scrapbook-print-bg"]];
        [self addSubview:backgroundImageView];
    }
    return self;
}

// ScrapbookPagePrintingView's init takes the frame to occupy and the
// ScrapbookPage to lay out.
- (void)showLeftScrapbookPage:(ScrapbookPage*)leftPage rightScrapbookPage:(ScrapbookPage*)rightPage {
    if (leftPage != nil) {
        ScrapbookPagePrintingView* leftPageView =
          [[ScrapbookPagePrintingView alloc]
            initWithFrame:ScrapbookOpposingPagesLeftPageFrame
            scrapbookPage:leftPage];
        [self addSubview:leftPageView];
    }

    if (rightPage != nil) {
        ScrapbookPagePrintingView* rightPageView =
          [[ScrapbookPagePrintingView alloc]
            initWithFrame:ScrapbookOpposingPagesRightPageFrame
            scrapbookPage:rightPage];
        [self addSubview:rightPageView];
    }
}

@end

In its init method, we call [super initWithFrame:] and pass a full-screen landscape orientation frame. Then we add a UIImageView containing the image representing an open book, which will be behind the two facing pages:

Creating a Multipage PDF Document from UIViews in iOS image

In the showLeftScrapbookPage:rightScrapbookPage: method, we accept one or two ScrapbookPage objects and create ScrapbookPagePrintingViews from them, then add them as subviews to self. Note that we pass different statically-defined CGRect frames to the ScrapbookPagePrintingView init method for left and right pages, to make sure that the resulting ScrapbookPagePrintingViews show up on the left or right side when added as subviews to self. These frames have different origins for left and right, but the same size, since each ScrapbookPagePrintingView is the same size.
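The frame geometry can be checked with quick arithmetic. As a standalone Python sketch of the two CGRect constants defined in the .m file:

```python
# The CGRect constants from the .m file, as (x, y, width, height):
LEFT_FRAME  = (36.0, 92.0, 475.0, 577.0)
RIGHT_FRAME = (514.0, 92.0, 475.0, 577.0)

# Same size, different origins:
assert LEFT_FRAME[2:] == RIGHT_FRAME[2:]

# The right page starts just past the left page's right edge
# (36 + 475 = 511), leaving a small gutter before x = 514.
gutter = RIGHT_FRAME[0] - (LEFT_FRAME[0] + LEFT_FRAME[2])
print(gutter)  # → 3.0
```

The right page’s far edge lands at 514 + 475 = 989, mirroring the 36pt left margin within the 1024pt-wide view.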

Instantiating a ScrapbookOpposingPagesPrintingView and passing one or two ScrapbookPages to showLeftScrapbookPage:rightScrapbookPage: results in a full-screen landscape orientation ScrapbookOpposingPagesPrintingView with an open book background image and two ScrapbookPages:

Creating a Multipage PDF Document from UIViews in iOS image

Putting it All Together

Now that we know how to generate PDF data from a single UIView or UIView subclass, and we know how to create a ScrapbookOpposingPagesPrintingView class containing two facing ScrapbookPages, we will add logic to iterate over an array of ScrapbookPages to create as many ScrapbookOpposingPagesPrintingViews as we need, noting that the last ScrapbookOpposingPagesPrintingView may only have a single ScrapbookPage on it if we have an odd number of ScrapbookPages.
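The two-at-a-time iteration, with an optional lone trailing page, is language-agnostic. Here is a minimal Python sketch of the same pairing logic (the `page_pairs` helper is hypothetical, not from the app):

```python
def page_pairs(pages):
    """Walk a list two items at a time, yielding (left, right) pairs;
    right is None when the page count is odd."""
    pairs = []
    for i in range(0, len(pages), 2):
        left = pages[i]
        right = pages[i + 1] if i + 1 < len(pages) else None
        pairs.append((left, right))
    return pairs

print(page_pairs(["p1", "p2", "p3"]))  # → [('p1', 'p2'), ('p3', None)]
```

Each resulting pair corresponds to one ScrapbookOpposingPagesPrintingView, and therefore one page of PDF output.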

To accomplish this, we need two methods: one that prepares the pdfData mutable data object to hold the PDF output and iterates through the ScrapbookPages, and one that creates and renders a ScrapbookOpposingPagesPrintingView for each pair of pages. Here’s the first method:

- (NSData*)scrapbookPdfDataForScrapbookPages:(NSArray*)scrapbookPages {
    NSMutableData* pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, CGRectMake(0.0f, 0.0f, 792.0f, 612.0f), nil);

    if (scrapbookPages.count > 0) {
        NSUInteger pageIndex = 0;
        do {
            ScrapbookOpposingPagesPrintingView* printingView =
              [[ScrapbookOpposingPagesPrintingView alloc] init];

            ScrapbookPage* leftPage = scrapbookPages[pageIndex];
            // only include right page if it exists
            ScrapbookPage* rightPage =
              pageIndex + 1 < scrapbookPages.count ?
              scrapbookPages[pageIndex + 1] : nil;
            [printingView showLeftScrapbookPage:leftPage rightScrapbookPage:rightPage];

            [self addPrintingViewPDF:printingView];
            // take two pages at a time
            pageIndex += 2;
        } while (pageIndex < scrapbookPages.count);
    }

    UIGraphicsEndPDFContext();
    return pdfData;
}

And the method that performs the rendering to PDF for each ScrapbookOpposingPagesPrintingView, called by the method above, is:

- (void)addPrintingViewPDF:(UIView*)printingView {
    // Mark the beginning of a new page.
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();

    // Scale down from 1024x768 to fit paper output (792x612; 792/1024 = 0.773)
    CGContextScaleCTM(pdfContext, 0.773f, 0.773f);
    [printingView.layer renderInContext:pdfContext];
}

Calling scrapbookPdfDataForScrapbookPages: with an array of ScrapbookPages results in an NSData object containing a PDF representation of the entire scrapbook, which can be used in many ways. In the app, we enable printing of the PDF output directly from the app via AirPrint, and also emailing it as a file attachment. Perhaps we’ll cover those two mechanisms in another blog post.

Daniel Wilson // February 27, 2014

Custom Django Widget

blog-image-23.jpg image

Writing a custom admin widget can be a little tricky, due to the way that form data is handled. To minimize the difficulty, I would highly recommend extending an existing widget if at all possible. It can save you a lot of trouble, since Django has a lot of custom labeling and logic to create, populate, validate, and submit admin forms. Once you have a functional baseline, any custom behavior can override the defaults.

This example comes from a project with a purchase request model. The admin site had a model changeform, which contained information about the requester, the requested item, and so on. We also had a textfield which was a human-readable phrase describing the purchase request. This field needed to have a button near it which would pull information from other parts of the form, and could then be manually edited and submitted to the database normally.

In this example, I will walk you through the creation of a custom formfield widget and how to get it properly plugged into the admin. The widget itself is just a textarea with a button, so all we need to do is take the HTML output from the textarea and append to it. The render() method accepts the currently instantiated widget object (self), the name of the form field using the widget (name), the current contents of the textarea (value), and any attributes passed to it by the widget class; it is responsible for returning a valid HTML string describing the widget, so that is what we will construct.

One thing you need to know is that each purchase request can be associated with an asset. We’re going to want to know which asset the purchase request is linked to, so we start off by creating a purchase request/asset dictionary. Thus:

# In apps/purchaserequest/widgets.py
from django import forms
from django.utils.safestring import mark_safe

# The related asset model lives elsewhere in the project;
# the import path here is assumed for illustration.
from assets.models import Asset


class PRNotesWidget(forms.widgets.Textarea):

    def render(self, name, value, attrs=None):
        # Build a dictionary linking purchase requests
        # with their corresponding assets
        pr_asset_dict = {int(asset.purchaserequest_id):
                         int(asset.asset_number)
                         for asset in Asset.objects.all()}

        # Start with the textarea, then wrap it in a script
        # containing the logic to populate it, and the
        # button to trigger the script.
        html = super(PRNotesWidget, self).render(name, value, attrs)
        html = """
          <script type="text/javascript">
            var populatePRNotes = function() {
              // Select the fields that will populate this field
              // (these element ids depend on your form)
              var qty = document.getElementById('id_quantity').value;
              var item = document.getElementById('id_item').value;
              var who = document.getElementById('id_requester').value;

              // Get the id of the purchase request from the form
              var pr_id = document.getElementById('id_purchaserequest').value;

              // Careful here: we're ending the string,
              // inserting the dictionary we built earlier,
              // and then continuing our string.
              var pr_asset_dict = """ + str(pr_asset_dict) + """;

              // Now access the dictionary using the purchase
              // request id as a key to get the corresponding
              // asset (if there is one)
              var pr_asset = pr_asset_dict[pr_id];

              // Build the text to display in the form field.
              var display_text = qty + ' ' + item + ' for ' + who;
              if (pr_asset) {
                display_text += ' (Asset #' + pr_asset + ')';
              }
              document.getElementById('id_""" + name + """').value = display_text;
            };
          </script>
          """ + html + """
          <!-- This button triggers the script's function
               and fills in the field. -->
          <button type="button" onclick="populatePRNotes()">
            Create PR Notes
          </button>"""

        # Since we are building the markup with string
        # concatenation, we need to mark it as safe in order
        # for it to be treated as html code.
        return mark_safe(html)
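One subtlety worth isolating is the string splice that injects the Python dictionary into the JavaScript source. A standalone sketch of that technique (no Django required):

```python
pr_asset_dict = {12: 1001, 15: 1002}  # purchase request id -> asset number

# End the string, insert the dictionary, continue the string:
js = """
var pr_asset_dict = """ + str(pr_asset_dict) + """;
"""

# A Python dict literal with int keys and values happens to also be
# a valid JavaScript object literal, so str() is enough here.
print(js.strip())  # → var pr_asset_dict = {12: 1001, 15: 1002};
```

As the post notes at the end, an AJAX lookup would avoid shipping the whole dictionary to the client.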

Now that our widget is defined, all we need to do is link it to an admin field. We do this by setting the formfield widget to PRNotesWidget like so:

# In apps/purchaserequest/fields.py
from purchaserequest.widgets import PRNotesWidget

class PRNotesField(models.TextField):

  def formfield(self, **kwargs):
    kwargs['widget'] = PRNotesWidget
    return super(PRNotesField, self).formfield(**kwargs)

The field needs to be explicitly specified in a form:

# In apps/purchaserequest/forms.py
from purchaserequest.fields import PRNotesField

class PRAdminForm(forms.ModelForm): 
  # The form for the purchase request model should
  # use our custom field
  accounting_notes = PRNotesField()

And then, of course, we need to make sure we’re using that form in the admin:

# In apps/purchaserequest/admin.py
from purchaserequest.forms import PRAdminForm

class PRAdmin(admin.ModelAdmin):
  # The purchase request admin should be using the
  # custom admin form
  form = PRAdminForm

You’ll note that I’ve split the admin, form, field, and widget into files with their respective names. This is only really necessary in a large project with lots of custom widgets and fields, but the structure is preferable both for accommodating future growth and for making the hierarchy and flow of the app easy to follow.

This was my first attempt at a custom widget, and a number of improvements could easily be made from here. For example, it is not necessary to create a custom field as I did, since Django provides a shortcut in a form’s Meta class to define a “widgets” dictionary with field names as keys and widgets as values. You’ll also notice that I query the database to build pr_asset_dict and dump the entire dictionary into JavaScript. A better way would be to make an AJAX call and retrieve only the asset I want. While the example presented here might be the most easily understood implementation, there is always room for optimization.

Daniel Wilson // January 2, 2014

Data Migrations with South and Django

blog-image-32.jpg image

The workflow for a data migration in Django with South migrations is relatively simple, and fairly well-documented. If you have a model that you want to modify, you’ll want to

  1. define your new fields and create a schemamigration;
  2. create a blank migration and access the ORM dictionary to write your data migration, which moves the data from the old fields to the new; and
  3. remove the old fields and create another schemamigration to say goodbye to those unsalted passwords forever.

The workflow is simple enough to understand, but if you want to do anything more complicated than break your names into first_name and last_name, you’ll need some more tools. Recently, I ran into a situation where I needed to condense two entire models into a single super-model that contained all fields from both of the originals. To illustrate, I will first give a simple, silly example. If you are neither of these things, feel free to skip to the latter section in which I lay out how to write an epic-level data migration.

Silly Example: Hybridizing Animals

First, lay out the models. Ducks and beavers each get a name, a weight, a tail type, and a boolean for their bill (by default, beavers don’t have one). For simplicity’s sake, put both of these in an “animals” app within models.py:

class Duck(models.Model):
  name = models.CharField(max_length=32)
  # DecimalField requires max_digits and decimal_places
  weight = models.DecimalField(max_digits=5, decimal_places=2)
  tail = models.CharField(default="feathered", max_length=32)
  bill = models.BooleanField(default=True)

class Beaver(models.Model):
  name = models.CharField(max_length=32)
  weight = models.DecimalField(max_digits=5, decimal_places=2)
  tail = models.CharField(default="broad and featherless", max_length=32)
  bill = models.BooleanField(default=False)

With that taken care of, run the initial migration

./manage.py schemamigration animals --initial

Then, create some animals in the database. Registering the app in the Django admin makes creating animals easy.

Time to get hybridizing! The three steps are schemamigration, datamigration, schemamigration, so start by creating the hybrid animal class. This goes in animals/models.py with the other two. Give it the same fields as before, but do not specify defaults because these need to come from the inherited classes, and they’re all required by default anyway.

class Platypus(models.Model):
  name = models.CharField(max_length=65)
  weight = models.DecimalField(max_digits=5, decimal_places=2)
  tail = models.CharField(max_length=32)
  bill = models.BooleanField()

New model added; run the schemamigration:

./manage.py schemamigration animals --auto

To set up the datamigration, begin by creating an empty migration. Don’t forget to give it a name:

./manage.py datamigration animals hybridize_ducks_and_beavers

Inside the migration file, write a forwards function:

def forwards(self, orm):
  from animals.models import Platypus
  for duck in orm['animals.duck'].objects.all():
    beaver = orm['animals.beaver'].objects.get(id=duck.id)
    platypus = Platypus(
      name=duck.name + "-" + beaver.name,
      weight=(duck.weight + beaver.weight) / 2,
      tail=beaver.tail,
      bill=duck.bill,
    )
    platypus.save()

A couple of things to note here:

  1. The script loops through every duck in the list of ducks. It matches every duck with a beaver by grabbing the beaver that has the same id as each duck. (It assumes, of course, that there is a matching beaver for each duck.)
  2. Since there are not currently any Platypuses registered, they do not appear in the ORM. Rather than referencing existing models – as done with ducks and beavers – the script needs to import Platypus from the animals models.py file, and create a new instance of the model each time it iterates through the loop.

The new platypuses have hyphenated names. Their weights are an average of their parents’, and they get their tails and bills from their beaver and duck parents, respectively.

The genetic experimentation is complete; all that is left is to remove the old models. In animals/models.py, delete all the duck and beaver code, and run

./manage.py schemamigration animals --auto

This will delete the old tables, leaving only platypuses!

Serious Example: Merging Django’s auth.user Model With a Custom User Model

Django’s default user model automatically provides a variety of commonly-used fields, such as username, email, password, is_staff, last_login, and so on. With the release of Django 1.5, it is now relatively simple to write a user model which encapsulates these fields as well as any other custom information that needs to stored about the user. However, prior to this, it was necessary to create a separate, custom table to contain any extra information, and link it via a one-to-one relationship to the auth.user table. This is the situation I was confronted with on a recent project, and when the time came to upgrade the project to Django 1.5, it made sense to combine the two user tables into one larger table to simplify storage and referencing. The procedure helped solidify my understanding of Django user models as well as South migrations, and I hope it helps you as well!

To begin, the auth_user table contained the columns: id, username, first_name, last_name, email, password, is_staff, is_active, is_superuser, last_login, and date_joined. Additionally, the auth_user model had many-to-many relationships with tables called “groups” and “user_permissions”. The custom user model was in an app called members. Thus, the members_user model contained the columns: user_ptr_id (the link to auth_user), user_type, birthdate, bio, email_prefs, hide_onboarding, cancel_state, cancel_reason, and photo. Additionally, the members_user model had three many-to-many fields: each user had favorite_comments, favorite_journal_entries, and favorite_videos.

Ultimately, I wanted all of this data to be encapsulated in a new model called “Profile” in the members app. First, I created the new Profile class in my members/models.py file. It was a duplicate of the existing members_user model, except that it also inherited from django.contrib.auth.models.AbstractUser. This is the mixin used by the regular auth.user model, and granted my Profile model all of the usual user fields (password, username, etc.). Then, I ran

./manage.py schemamigration members --auto

to generate the schema migration that creates the new, empty table, ready to be populated.

The tricky part is the data migration. In order to coerce the data into a single table, it is necessary to loop through each auth_user and, for each one:

  1. create a new profile object,
  2. insert the auth_user data,
  3. create new many-to-many tables from auth_user,
  4. insert the members_user data, and
  5. create new many-to-many tables from members_user.

First, run

./manage.py datamigration members migrate_userdata_to_profiledata

Next, the data migration forwards function:

class Migration(DataMigration):

  def forwards(self, orm):
    "Write your forwards methods here."
    # Note: Remember to use orm['appname.ModelName']
    # rather than "from appname.models..."

    for authuser in orm['auth.user'].objects.all():

      # Create a new members.Profile for every existing auth.User. I
      # needed to import Profile in order to create new instances of it.
      from members.models import Profile
      memberprofile = Profile(
        username=authuser.username,
        first_name=authuser.first_name,
        last_name=authuser.last_name,
        email=authuser.email,
        password=authuser.password,
        is_staff=authuser.is_staff,
        is_active=authuser.is_active,
        is_superuser=authuser.is_superuser,
        last_login=authuser.last_login,
        date_joined=authuser.date_joined,
      )
      memberprofile.save()

      # Transfer the many-to-many tables from auth_user
      for group in authuser.groups.all():
        memberprofile.groups.add(group)
      for permission in authuser.user_permissions.all():
        memberprofile.user_permissions.add(permission)

      try:
        # If there is an associated members.User,
        # add those fields to the members.Profile
        memberuser = orm['members.user'].objects.get(user_ptr_id=authuser.id)
        memberprofile.user_type = memberuser.user_type
        memberprofile.birthdate = memberuser.birthdate
        memberprofile.bio = memberuser.bio
        memberprofile.email_prefs = memberuser.email_prefs
        memberprofile.hide_onboarding = memberuser.hide_onboarding
        memberprofile.cancel_state = memberuser.cancel_state
        memberprofile.cancel_reason = memberuser.cancel_reason
        memberprofile.photo = memberuser.photo

        # Transfer the m2m fields from user to profile
        for comment in memberuser.favorite_comments.all():
          memberprofile.favorite_comments.add(comment)
        for journalentry in memberuser.favorite_journal_entries.all():
          memberprofile.favorite_journal_entries.add(journalentry)
        for video in memberuser.favorite_videos.all():
          memberprofile.favorite_videos.add(video)

      except orm['members.user'].DoesNotExist:
        # Not every auth.User has a matching members.User; that's fine.
        pass
      except Exception as e:
        # In case there is a problem getting the related
        # members_user model, I used pdb to diagnose the issue.
        import pdb; pdb.set_trace()

      # All done! Save, and move on to the next user.
      memberprofile.save()
After performing a data migration this big, it’s important to check the actual data for consistency. Indeed, as I wrote this function, I performed the data migration, identified an error, and deleted the table data and migration many times.

The last step was to delete the old members.user model and run

./manage.py schemamigration members --auto

Transition complete; all user data is in a single table!

Concentric Sky uses Django as one of our core technologies. With Django, we build backends for mobile applications, craft custom web applications, and deploy data-driven websites. We’ve written a number of articles on Django; use the tags to find more.

The Register-Guard // October 7, 2013

Creativity, Beauty Mingle at Concentric Sky

creativity-beauty-concentricsky1.jpg image

What does a huge dome that pulsates with colored LED lights when music is playing have to do with keeping the creativity flowing at Concentric Sky, a Eugene Web development firm?

Everything, it turns out.

In late August, Concentric Sky developer Yona Appletree — the dome’s main visionary — his boss, Wayne Skipper, and several co-workers set up the 30-foot-high dome in Nevada’s Black Rock Desert for the annual Burning Man arts festival.

It created quite a splash, including being named “Best. LED. Dome. Ever” by CNET News, a computing and technology news service.

Over six or seven months, Appletree, with help from Skipper and other Concentric Sky developers, designed the dome, built its hardware and programmed its computers.

The dome wasn’t a company project, but it had the full endorsement — including substantial financial backing — of Skipper, who founded Concentric Sky in 2005.

Why spend time and resources on something that doesn’t directly contribute to the company’s bottom line?

“The bottom line is only one important metric of a successful business,” Skipper said. “It’s important to keep employees happy and motivated in order to retain the type of talent that you need to innovate.”

Skipper himself has an eclectic work history, starting off in avionics systems for the Navy, then silicon chip manufacturing, with stints at Texas Instruments, Applied Materials and Dell. He left Dell to study classical art and architecture in Europe, then turned his attention to software design.

“Creative stimulation is the seed from which innovation grows,” Skipper said. “For this reason, I surround my team with art and thought provoking displays meant to keep the creative juices flowing. I believe we can extrapolate this to the public at large, and for this reason I’m a big proponent of public art. Creativity is good for everyone.”

Concentric Sky is in the business of developing sophisticated Websites and mobile applications for a range of clients, including National Geographic, Encyclopaedia Britannica and the American Park Network.

In addition to supporting outside projects that encourage developers and designers to think outside the box, Concentric Sky takes great pains in selecting its clients, said Cale Bruckner, vice president of the 55-employee firm.

“We spend a lot of time reviewing projects that come to Concentric Sky to choose projects that interest the people here,” he said.

One such project is the “Oh Ranger! ParkFinder” application for iPhones and iPad, which enables people to find national and state parks to visit and learn about the activities at each park. Concentric Sky has worked over the past year with American Park Network building a back-end content management system to manage all the content to support the application, Bruckner said.

“We have a lot of people here that are outdoor enthusiasts,” he said. “It’s easy to inspire people to do their best work when you have projects like the national ParkFinder.”

Sometimes work for a client can turn into a little business of its own. That is the case with Emergent Health Care Solutions, a new Eugene company founded earlier this year by Dr. Dan Fitzpatrick, a surgeon at Slocum Orthopedics; Rex Hughes, owner of Hughes Fire Equipment in Springfield; and Concentric Sky, which has a small ownership stake in the company, Bruckner said.

“They came to us with a broad idea, and we helped them distill it into something that could be implemented in a year,” he said.

The idea is an app that provides doctors secure access to NextGen, the Electronic Health Record used at Slocum.

Instead of having to sit down at the NextGen terminal to access patient charts, doctors can pull up those records on their iPads as they move from room to room. A doctor even could be at his kid’s soccer practice, for example, take a call and look up a patient’s record through a secure, remote connection, Bruckner said.

Emergent Health Care Solutions charges a site license fee, and every doctor at the site who wants to use the app also pays a fee, “so it’s potentially a very large opportunity,” Bruckner said.

As for Concentric Sky’s Radiance Dome, it may soon make an appearance closer to home, perhaps as a fundraiser for the Science Factory, said Skipper, who serves on the museum’s board, or at an event with Oregon Museum of Science and Industry in Portland.

The ultimate goal is to create a curriculum around simple wiring, electronics and computer programming that in Skipper’s opinion “is something that children could do.”

“The dome project is close to my heart, merging both hardware and software in innovative new ways,” he said. “We created a beautiful display that brought joy to hundreds of people.”

Watch a video of the Concentric Sky dome at https://goo.gl/rNyP0E

By Sherri Buri McDonald Appeared in print in The Register-Guard: Monday, OCT. 7, 2013, Page K5 View original article

Arion Sprague // July 5, 2013

Power in Python’s Hidden New

blog-image-4.jpg image

__new__ is one of the most easily abused features in Python. It’s obscure, riddled with pitfalls, and almost every use case I’ve found for it has been better served by another of Python’s many tools. However, when you do need __new__, it’s incredibly powerful and invaluable to understand.

The predominant use case for __new__ is in metaclasses. Metaclasses are complex enough to merit their own article, so I don’t touch on them here. If you already understand metaclasses, great. If not, don’t worry; understanding how Python creates objects is valuable regardless.


With the proliferation of class-based languages, constructors are likely the most popular method for instantiating objects.


class StandardClass {
    private int x;
    public StandardClass() {
        this.x = 5;
    }

    public int getX() {
        return this.x;
    }
}

class StandardClass(object):
    def __init__(self, x):
        self.x = x

Even JavaScript, a prototype-based language, has object constructors via the new keyword.

function StandardClass(x) {
    this.x = x;
}
var standard = new StandardClass(5);
alert(standard.x == 5);

Newer is Better

In Python, as well as many other languages, there are two steps to object instantiation:

The New Step

Before you can access an object, it must first be created. This is not the constructor. In the above examples, we use this or self to reference an object in the constructor; the object had already been created by then. The New Step creates the object before it is passed to the constructor. This generally involves allocating space in memory and/or whatever language specific actions newing-up an object requires.

The Constructor Step

Here, the newed-up object is passed to the constructor. In Python, this is when __init__ is called.

Python Object Creation

This is the normal way to instantiate a StandardClass object:

standard = StandardClass(5)
standard.x == 5

StandardClass(5) is the normal instance creation syntax for Python. It performs the New Step followed by the Constructor Step for us. Python also allows us to deconstruct this process:

# New Step
newed_up_standard = object.__new__(StandardClass)
type(newed_up_standard) is StandardClass
hasattr(newed_up_standard,'x') is False

# Constructor Step
StandardClass.__init__(newed_up_standard, 5)
newed_up_standard.x == 5

object.__new__ is the default New Step for object instantiation. It’s what creates an instance from a class. This happens implicitly as the first part of StandardClass(5).

Notice that x is not set until after newed_up_standard is run through __init__. This is because object.__new__ doesn’t call __init__; they are disparate functions. If we wanted to perform checks on newed_up_standard or manipulate it before the constructor is run, we could. However, explicitly calling the New Step followed by the Constructor Step is neither clean nor scalable. Fortunately, there is an easier way.

Controlling New with __new__

Python allows us to override the New Step of any object via the __new__ magic method.

class NewedBaseCheck(object):
    def __new__(cls):
        obj = super(NewedBaseCheck,cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj
    def __init__(self):
        self.x = 5

newed = NewedBaseCheck()
newed.x == 5
newed._from_base_class is True

__new__ takes a class instead of an instance as the first argument. Since it creates an instance, that makes sense. super(NewedBaseCheck, cls).__new__(cls) is very important: we don’t want to call object.__new__ directly; you’ll see why later.

Why is _from_base_class defined in __new__ instead of __init__? It’s metadata about object creation, which makes more semantic sense in __new__. However, if you really wanted to, you could define _from_base_class in __init__:

class StandardBaseCheck(object):
    def __init__(self):
        self.x = 5
        self._from_base_class = type(self) == StandardBaseCheck

standard_base_check = StandardBaseCheck()
standard_base_check.x == 5
standard_base_check._from_base_class is True

There is a major behavioral difference between NewedBaseCheck and StandardBaseCheck in how they handle inheritance:

class SubNewedBaseCheck(NewedBaseCheck):
    def __init__(self):
        self.x = 9

subnewed = SubNewedBaseCheck()
subnewed.x == 9
subnewed._from_base_class is False

class SubStandardBaseCheck(StandardBaseCheck):
    def __init__(self):
        self.x = 9

substandard_base_check = SubStandardBaseCheck()
substandard_base_check.x == 9
hasattr(substandard_base_check,"_from_base_class") is False

Because we failed to call super(...).__init__ in either subclass constructor, _from_base_class is never set on substandard_base_check. subnewed still gets it (as False), because the inherited __new__ sets it before __init__ runs.

__new__ and __init__

Up until now, classes defining both __init__ and __new__ had no-argument constructors. Adding arguments has a few pitfalls to watch out for. We’ll modify NewedBaseCheck:

class NewedBaseCheck(object):
    def __new__(cls):
        obj = super(NewedBaseCheck,cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self, x):
        self.x = x

try:
    NewedBaseCheck(5)
except TypeError:
    print True

Instantiating a new NewedBaseCheck throws a TypeError. NewedBaseCheck(5) first calls NewedBaseCheck.__new__(NewedBaseCheck, 5). Since __new__ takes only one argument, Python complains. Let’s fix this:

class NewedBaseCheck(object):
    def __new__(cls, x):
        obj = super(NewedBaseCheck,cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self, x):
        self.x = x

newed = NewedBaseCheck(5)
newed.x == 5

There are still problems with subclassing:

class SubNewedBaseCheck(NewedBaseCheck):
    def __init__(self, x, y):
        self.x = x
        self.y = y

try:
    SubNewedBaseCheck(5, 6)
except TypeError:
    print True

We get the same TypeError as above; __new__ takes cls and x, and we’re trying to pass in cls, x, and y. The generic fix is fairly simple:

class NewedBaseCheck(object):
    def __new__(cls, *args, **kwargs):
        obj = super(NewedBaseCheck,cls).__new__(cls)
        obj._from_base_class = type(obj) == NewedBaseCheck
        return obj

    def __init__(self, x):
        self.x = x

newed = NewedBaseCheck(5)
newed.x == 5

subnewed = SubNewedBaseCheck(5,6)
subnewed.x == 5
subnewed.y == 6

Unless you have a good reason otherwise, always define __new__ with *args and **kwargs.
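One concrete payoff of the *args/**kwargs signature is an instance-caching (interning) class, where __new__ receives whatever arguments any subclass constructor happens to take. This is my own toy example, not from the text above:

```python
# Hypothetical example: an interning class that reuses instances keyed
# by constructor arguments. Because __new__ accepts *args, subclasses
# may add constructor parameters without breaking it.
class Interned(object):
    _cache = {}

    def __new__(cls, *args, **kwargs):
        key = (cls, args)
        if key not in cls._cache:
            cls._cache[key] = super(Interned, cls).__new__(cls)
        return cls._cache[key]

    def __init__(self, name):
        self.name = name

a = Interned("x")
b = Interned("x")
a is b                  # True: same cached instance
a is not Interned("y")  # True: different key, different instance
```

Note that __init__ still runs on every call, including cache hits; for this sketch that’s harmless because it just re-assigns the same attribute.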

The Real Power of __new__

__new__ is incredibly powerful (and dangerous) because you manually return an object. There are no limitations to the type of object you return.

class GimmeFive(object):
    def __new__(cls, *args, **kwargs):
        return 5

GimmeFive() == 5

If __new__ doesn’t return an instance of the class it’s bound to (e.g. GimmeFive), it skips the Constructor Step entirely:

class GimmeFive(object):
    def __new__(cls, *args, **kwargs):
        return 5

    def __init__(self,x):
        self.x = x

five = GimmeFive()
five == 5
isinstance(five,int) is True
hasattr(five, "x") is False

That makes sense: __init__ will throw an error if passed anything but an instance of GimmeFive, or a subclass, for self. Knowing all this, we can easily define Python’s object creation process:

def instantiate(cls, *args, **kwargs):
    obj = cls.__new__(cls, *args, **kwargs)
    if isinstance(obj,cls):
        cls.__init__(obj, *args, **kwargs)
    return obj

instantiate(GimmeFive) == 5
newed = instantiate(NewedBaseCheck, 5)
type(newed) == NewedBaseCheck
newed.x == 5

Don’t Do This. Ever.

While experimenting for this post I created a monster that, like Dr. Frankenstein, I will share with the world. It is a great example of how horrifically __new__ can be abused. (Seriously, don’t ever do this.)

class A(object):
    def __new__(cls):
        return super(A,cls).__new__(B)
    def __init__(self):
        self.name = "A"

class B(object):
    def __new__(cls):
        return super(B,cls).__new__(A)
    def __init__(self):
        self.name = "B"

a = A()
b = B()
type(a) == B
type(b) == A
hasattr(a,"name") == False
hasattr(b,"name") == False

The point of the above code snippet: please use __new__ responsibly; everyone you code with will thank you.

__new__ and the new step, in the right hands and for the right task, are powerful tools. Conceptually, they neatly tie together object creation. Practically, they are a blessing when you need them. They also have a dark side. Use them wisely.
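For a taste of a legitimate use: immutable built-ins like float fix their value during the New Step, so by the time __init__ runs it is too late to change it; overriding __new__ is the only hook. (Celsius here is my own toy example, not from the article.)

```python
# Subclassing an immutable type: the float's value must be chosen in
# __new__, because once the instance exists the value is frozen.
class Celsius(float):
    def __new__(cls, *args, **kwargs):
        # Clamp to absolute zero before the value is fixed.
        value = max(float(args[0]), -273.15)
        return super(Celsius, cls).__new__(cls, value)

Celsius(-500.0) == -273.15  # True: clamped during the New Step
Celsius(21.5) == 21.5       # True
```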

Ross Lodge // December 13, 2012

Implementing WS-Security with CXF in a WSDL-First Web Service

security.jpg image

Security is one of the most common requirements for SOAP-based web services. Several standards exist, among them WS-Security and WS-SecurityPolicy. They can be hard to implement, and they are often ignored in favor of a more ad hoc security standard, most often using password authentication in the message itself and SSL for transport layer security.

Trying to implement these standards recently, I had a very hard time finding a consistent and complete guide for doing so, or even a good explanation of the standards themselves. I did find good information on Glen Mazza’s Blog, and my implementation and this tutorial owe much to that information. But that tutorial is based on another one, which is in turn based on a third, and I found it difficult to filter through the layers to find what was necessary. So I wrote this to provide a more complete and easy-to-use guide.

This tutorial will try to take you step-by-step through adding a security policy to an existing working web service WSDL as well as adding the additional CXF and Spring configuration necessary to make it work. It will not tell you how to build a CXF web service to start with, or how to configure Spring to make it work.

Tools Needed

  • Maven, 2.2.1 or better
  • JDK 1.6 or better


You can find the code necessary for the tutorial here:

Technologies and Techniques Used

This tutorial uses Apache CXF to provide the backing for a JAX-WS web service which is built WSDL-First.

It uses CXF instead of the Glassfish jaxws-ri implementation or the embedded JDK implementation because I found getting jaxws-ri to do the same thing very cumbersome: it needed to reside in an endorsed standards directory (which puts an installation burden on any system administrators using the product); it requires annotations in the WSDL to work correctly; it requires different annotations for the client and server, so two WSDL versions need maintenance; and it failed with a fatal bug when SOAP faults were returned. CXF exhibited none of these problems, and was easy to integrate with Spring. That said, we generate the JAX-WS and JAXB code with Sun/Oracle’s standard tools to make sure they’re compliant.

The service is built WSDL-first because I believe that this is the most implementation-independent way of producing a SOAP-based web service, and because I think it gives you better interfaces by forcing you to think of them as services, rather than as java methods. It also allows us to clearly specify the security policy, which makes it easier for service consumers to comply.

This example also uses a multi-module Maven project which separates the WSDL, the generated JAX-WS code, and the service implementation/WAR into separate modules, which allows for easy re-use of the WSDL and/or the generated code.

The tutorial example also uses Spring, and the starting code consists of a complete working web service, packaged as a WAR, configured via Spring. Although various techniques are used to construct the configuration, I won’t be explaining the base Maven or Spring configuration in detail.

That said, there are some “tricks” in the code that might cause problems moving this example into an existing web service project:

  • The WSSecurityTutorialJaxWs project uses binding customizations to make the generated code more Java-friendly. These are like any other standard JAX-WS binding customizations, but you should note they exist.
  • The WSSecurityTutorialJaxWs unpacks the WSDL into a temporary directory for generation; it also unpacks the WSDL into the target/classes directory so that it ends up in the final WAR. This is because various tools, including CXF, can load the WSDL from the classpath rather than from the endpoint server, and so it is added to the jar as a convenience.
  • The WSSecurityTutorialWAR module is configured by various files through Spring, using an extension of Spring’s property placeholder functionality which will, if necessary, read properties from system property or JNDI env values. There are three tiers of property configuration files: a default one, a deployment one, and a test one. The intent is for the default one (in src/main/resources) to be rolled into the WAR, for the deployment one to be modified and deployed to the deployment server’s file system, and its location specified via a system property or JNDI value.
  • SLF4J is used for logging, and configuration files in the META-INF directories of the WAR and test classpaths force CXF to use SLF4J as well.
  • The WAR module also uses TestNG instead of JUnit, which allows us to “group” tests. A normal build will run the “unit” and “local-integration” groups. Adding the “integration-test” profile to the build (e.g., ‘mvn clean install -Pintegration-test’) executes the “remote-integration” group and uses a plugin to start Tomcat so that the service can be tested running in a container.

Getting Started

You can download the starting code here. If you unzip it, you should be able to cd into the WSSecurityTutorialParent module on the command line and execute “mvn clean install -Pintegration-test” successfully. If not, something is wrong with your environment, and you will have to diagnose it before you can continue.

Altering the WSDL

To begin, you have to decide what the service’s security policy will actually be, and modify the WSDL to specify it.

Aside from the specifications themselves, there seems to be precious little information about the security specification standard (WS-SecurityPolicy) available. Some information can be found here, here, and here.

Basically to declare a security policy for your web service, you have to define the policy using the http://schemas.xmlsoap.org/ws/2004/09/policy (wsp) and http://schemas.xmlsoap.org/ws/2005/07/securitypolicy (sp) schemas in your WSDL, and then attach the policy declarations to the service, operation, and/or input/output bindings that you want controlled by that policy.

A policy is declared with the “WS-Policy” schema/vocabulary (https://goo.gl/HBi5DQ), and looks like this, basically:

WS-Policy Declaration

<wsp:Policy wsu:Id="UniqueIdentifier">
    <!-- one or more policy assertions go here -->
</wsp:Policy>

Inside the policy declaration, which in itself doesn’t define what the policy is, you need to add security policy declarations. These are defined by the sp schema (http://schemas.xmlsoap.org/ws/2005/07/securitypolicy), and there are a large number of variations, as defined in the specification linked above.

Basically, for our tutorial, we want to require that the body and custom headers of our messages are signed with a X.509 certificate (for source authentication), and that the body of our messages is encrypted with an X.509 certificate (for message privacy).

A policy to encrypt an input or output message is pretty simple, and looks basically like this:

WS-SecurityPolicy Input/Output Declaration

<wsp:Policy wsu:Id="InputOutputUniqueIdentifier">
    <sp:EncryptedParts>
        <sp:Body />
    </sp:EncryptedParts>
    <sp:SignedParts>
        <sp:Body />
        <sp:Header Namespace="http://example.com/tutotial/"/>
    </sp:SignedParts>
</wsp:Policy>

This says any operation whose input or output is linked to InputOutputUniqueIdentifier must have an encrypted body and must have a signed body and headers (the signed headers are all in the given namespace).

In theory we could require that the headers also be encrypted, but there is a CXF bug which prevents this from working (CXF-3452; also see related CXF-3453).

We then need to declare, for the entire service binding, how the input/output binding will take place (what kinds of tokens, how the tokens are exchanged, etc.). The options here are complex, and aside from the rather opaque specification, there’s not much explanatory documentation available.

WS-SecurityPolicy Binding Policy Declaration

<wsp:Policy wsu:Id="UniqueBindingPolicyIdentifier">
    <sp:AsymmetricBinding>
        <wsp:Policy>
            <sp:InitiatorToken>
                <wsp:Policy>
                    <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
                        <wsp:Policy>
                            <sp:WssX509V3Token11 />
                        </wsp:Policy>
                    </sp:X509Token>
                </wsp:Policy>
            </sp:InitiatorToken>
            <sp:RecipientToken>
                <wsp:Policy>
                    <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Never">
                        <wsp:Policy>
                            <sp:WssX509V3Token11 />
                            <sp:RequireIssuerSerialReference />
                        </wsp:Policy>
                    </sp:X509Token>
                </wsp:Policy>
            </sp:RecipientToken>
            <sp:Layout>
                <wsp:Policy>
                    <sp:Strict />
                </wsp:Policy>
            </sp:Layout>
            <sp:IncludeTimestamp />
            <sp:OnlySignEntireHeadersAndBody />
            <sp:AlgorithmSuite>
                <wsp:Policy>
                    <sp:Basic128 />
                </wsp:Policy>
            </sp:AlgorithmSuite>
            <sp:EncryptSignature />
        </wsp:Policy>
    </sp:AsymmetricBinding>
    <sp:Wss11>
        <wsp:Policy>
            <sp:MustSupportRefIssuerSerial />
        </wsp:Policy>
    </sp:Wss11>
</wsp:Policy>

This says an AsymmetricBinding will be used (asymmetric or public/private keys rather than symmetric encryption); the initiator must always include an X.509 token; the return message will also be signed/encrypted with an X.509 certificate, but the token itself will not be included and instead an issuer serial # reference will be included. Additionally, strict header layout is used; a timestamp is included and messages will be rejected if the timestamp is too far out-of-date (to avoid replay attacks); only complete headers and bodies must be signed rather than child elements of either; the “Basic128” algorithm suite is used; the signature itself must be encrypted; and the caller must support issuer serial references.

If we wanted to include a further layer of security for message transport, or wanted to use transport encryption instead of message-level encryption, we could add something like:

HTTPS Transport Policy Declaration

<sp:TransportBinding>
    <wsp:Policy>
        <sp:TransportToken>
            <wsp:Policy>
                <sp:HttpsToken RequireClientCertificate="false" />
            </wsp:Policy>
        </sp:TransportToken>
    </wsp:Policy>
</sp:TransportBinding>

So to implement these assertions, you should do the following:

Add to the attributes of your wsdl:definitions element:

  • xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
  • xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
  • xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy"

I also added, for editor convenience:

Add the complete set of declarations to your WSDL (I added them as the last elements in the WSDL):

Complete Tutorial Binding Assertion

<wsp:Policy wsu:Id="TutorialBindingPolicy">
    <sp:AsymmetricBinding>
        <wsp:Policy>
            <sp:InitiatorToken>
                <wsp:Policy>
                    <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
                        <wsp:Policy>
                            <sp:WssX509V3Token11 />
                        </wsp:Policy>
                    </sp:X509Token>
                </wsp:Policy>
            </sp:InitiatorToken>
            <sp:RecipientToken>
                <wsp:Policy>
                    <sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Never">
                        <wsp:Policy>
                            <sp:WssX509V3Token11 />
                            <sp:RequireIssuerSerialReference />
                        </wsp:Policy>
                    </sp:X509Token>
                </wsp:Policy>
            </sp:RecipientToken>
            <sp:Layout>
                <wsp:Policy>
                    <sp:Strict />
                </wsp:Policy>
            </sp:Layout>
            <sp:IncludeTimestamp />
            <sp:OnlySignEntireHeadersAndBody />
            <sp:AlgorithmSuite>
                <wsp:Policy>
                    <sp:Basic128 />
                </wsp:Policy>
            </sp:AlgorithmSuite>
            <sp:EncryptSignature />
        </wsp:Policy>
    </sp:AsymmetricBinding>
    <sp:Wss11>
        <wsp:Policy>
            <sp:MustSupportRefIssuerSerial />
        </wsp:Policy>
    </sp:Wss11>
</wsp:Policy>

<wsp:Policy wsu:Id="TutorialInputBindingPolicy">
    <sp:EncryptedParts>
        <sp:Body />
    </sp:EncryptedParts>
    <sp:SignedParts>
        <sp:Body />
        <sp:Header Namespace="http://example.com/tutotial/"/>
    </sp:SignedParts>
</wsp:Policy>

<wsp:Policy wsu:Id="TutorialOutputBindingPolicy">
    <sp:EncryptedParts>
        <sp:Body />
    </sp:EncryptedParts>
    <sp:SignedParts>
        <sp:Body />
        <sp:Header Namespace="http://example.com/tutotial/"/>
    </sp:SignedParts>
</wsp:Policy>

You then must “reference” the policy declarations where you want them used. To each wsdl:binding element where the binding policy should apply, add:

Binding Policy Reference

<wsp:PolicyReference URI="#TutorialBindingPolicy" />

For each input element where the policy should apply, add:

Input Policy Reference

<wsp:PolicyReference URI="#TutorialInputBindingPolicy"/>

For each output element where the policy should apply, add:

Output Policy Reference

<wsp:PolicyReference URI="#TutorialOutputBindingPolicy"/>

So, for instance, the tutorial’s code:

Complete Tutorial Binding

<wsdl:binding name="TutorialWebServiceSOAP" type="tns:TutorialWebService">
    <wsp:PolicyReference URI="#TutorialBindingPolicy" />
    <soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http" />
    <wsdl:operation name="sendTutorialMessage">
        <soap:operation soapAction="http://example.com/tutotial/sendTutorialMessage" />
        <wsdl:input>
            <wsp:PolicyReference URI="#TutorialInputBindingPolicy"/>
            <soap:body use="literal" parts="parameters" />
            <soap:header use="literal" part="source" message="tns:TutorialRequest"/>
        </wsdl:input>
        <wsdl:output>
            <wsp:PolicyReference URI="#TutorialOutputBindingPolicy"/>
            <soap:body use="literal" parts="response"/>
            <soap:header use="literal" part="acknowledgment" message="tns:TutorialResponse"/>
        </wsdl:output>
    </wsdl:operation>
</wsdl:binding>
<wsdl:service name="TutorialWebService">
    <wsdl:port name="TutorialWebServiceSOAP" binding="tns:TutorialWebServiceSOAP">
        <soap:address location="http://localhost/" />
    </wsdl:port>
</wsdl:service>

Implementing the Binding

Now you need to get CXF to read, enforce, and support the binding on the server and client. In our example, the server is the end-result WAR of the WAR module, and the client example is the integration test cases in that module.


To do this, you will need to add additional CXF dependencies: one to support WS-Policy, one to support WS-Security, and one as an encryption provider.

In the tutorial example, the Parent module controls the versions, exclusions, etc. of all dependencies, so to the dependencyManagement element of the Parent POM, add:

New Dependency Management Entries


And to the WAR module’s POM:

New WAR Dependency Entries


New Spring Configuration Files

These new dependencies allow CXF to process the policy declarations and the new headers. To activate them, you need to load the CXF Spring configuration files for those new CXF modules. So, to the WAR’s web.xml you should add, right under the existing classpath:META-INF/cxf/cxf-servlet.xml entry:

New Spring Files


And to your client, right after classpath*:/META-INF/cxf/cxf-extension-http.xml, you should add the same two XML files. For the tutorial, this is done in the ContextConfigurations attribute of TutorialWebServiceTest.java.

I found, when experimenting with this, that the CXF configuration files are sensitive to the order in which they are loaded by Spring – so the order specified above for the two new files, and where they are placed relative to existing CXF configurations, seems to be important.

Generate Certificates

Unless you have existing X.509 certificates for your client and server, you are going to have to generate new ones. Of course, for a production scenario, you should have issuer-signed certificates from a recognized authority such as Verisign, but for testing and development, and for this tutorial, self-signed certificates can be used. You can use the Java keytool for this; you will need to create two keystores (client and server), generate a client key and a server key, export the public keys, and import the public keys into the opposite number’s keystore. A script to do this is here:



# Set the values we'll use for the generation
read -p"Server Key Alias?" serverkeyalias
read -p"Server Key Password?" serverkeypassword
read -p"Server Keystore Password?" serverstorepassword
read -p"Server Keystore File Name?" serverkeystorename

read -p"Client Key Alias?" clientkeyalias
read -p"Client Key Password?" clientkeypassword
read -p"Client Keystore Password?" clientstorepassword
read -p"Client Keystore File Name?" clientkeystorename

# Generate the server and client keys
keytool -genkey -alias $serverkeyalias -keyalg RSA -sigalg SHA1withRSA -keypass $serverkeypassword -storepass $serverstorepassword -keystore $serverkeystorename -dname "cn=localhost"
keytool -genkey -alias $clientkeyalias -keyalg RSA -sigalg SHA1withRSA -keypass $clientkeypassword -storepass $clientstorepassword -keystore $clientkeystorename -dname "cn=clientuser"

# Export the client key and import it to the server keystore
keytool -export -rfc -keystore $clientkeystorename -storepass $clientstorepassword -alias $clientkeyalias -file $clientkeyalias.cer
keytool -import -trustcacerts -keystore $serverkeystorename -storepass $serverstorepassword -alias $clientkeyalias -file $clientkeyalias.cer -noprompt
rm $clientkeyalias.cer

# Export the server key and import it to the client keystore
keytool -export -rfc -keystore $serverkeystorename -storepass $serverstorepassword -alias $serverkeyalias -file $serverkeyalias.cer
keytool -import -trustcacerts -keystore $clientkeystorename -storepass $clientstorepassword -alias $serverkeyalias -file $serverkeyalias.cer -noprompt
rm $serverkeyalias.cer

Of course you should note or remember the necessary passwords; you will need them later.

These keystores need to be placed where the server or client can read them. For the tutorial, the client keystore goes into src/test/resources, and the server one goes into src/main/springconfig/local. You will later need to tell the client and server, via Spring properties, where these are.

Create a CallbackHandler

To get passwords for specific keys, CXF uses an implementation of javax.security.auth.callback.CallbackHandler. If you don’t already have one, you will need to create one. Create a new java class that implements javax.security.auth.callback.CallbackHandler that handles callbacks of type org.apache.ws.security.WSPasswordCallback. For example:


package com.example.tutorial.ws.security;

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

import org.apache.ws.security.WSPasswordCallback;

/**
 * Really a callback for key passwords.  Configure it with a map
 * of key-alias-to-password mappings.  Obviously this could
 * be extended to encrypt or obfuscate these passwords if desired.
 */
public class KeystorePasswordCallback implements CallbackHandler {

    private Map<String, String> passwords = new HashMap<String, String>();

    /**
     * {@inheritDoc}
     * @see javax.security.auth.callback.CallbackHandler#handle(javax.security.auth.callback.Callback[])
     */
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof WSPasswordCallback) {
                WSPasswordCallback pc = (WSPasswordCallback)callback;

                String pass = passwords.get(pc.getIdentifier());
                if (pass != null) {
                    pc.setPassword(pass);
                }
            }
        }
    }

    /**
     * @return the passwords
     */
    public Map<String, String> getPasswords() {
        return passwords;
    }

    /**
     * @param passwords the passwords to set
     */
    public void setPasswords(Map<String, String> passwords) {
        this.passwords = passwords;
    }
}


Configure the Service

Next, you will need to configure the web service to handle WS-Security. Assuming you already have a CXF service defined in a Spring configuration file, you need to add:

  • The CallbackHandler you just created, with necessary passwords
  • A series of properties for the keystore to be used by the service
  • The key alias to be used for signing

To do this to the tutorial code, find cxf-service-config.xml, and add:

cxf-service-config.xml Additions

<bean id="keystorePasswordCallback" class="com.example.tutorial.ws.security.KeystorePasswordCallback">
    <property name="passwords">
        <map>
            <entry key="${wss.keyAlias}" value="${wss.keyPassword}"/>
        </map>
    </property>
</bean>

<util:properties id="keystoreProperties">
    <prop key="org.apache.ws.security.crypto.provider">org.apache.ws.security.components.crypto.Merlin</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.type">${wss.keystoreType}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.password">${wss.keystorePassword}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.alias">${wss.keyAlias}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.file">${wss.keystorePath}</prop>
</util:properties>

These define a password callback, with a key alias entry and password, and the properties used to manage the keystore. Note that all of these entries are defined as Spring property tokens; you will define their values shortly.

And to the existing jaxws:endpoint/jaxws:properties in that file, add:

cxf-service-config.xml Additions

<entry key="ws-security.callback-handler" value-ref="keystorePasswordCallback"/>
<entry key="ws-security.encryption.properties" value-ref="keystoreProperties"/>
<entry key="ws-security.signature.properties" value-ref="keystoreProperties"/>
<entry key="ws-security.encryption.username" value="useReqSigCert"/>

The entry useReqSigCert tells CXF to “encrypt the response with the same certificate that signed the request”.

In this example we also use the same keystore properties for encryption and signature; if you have separate key and trust stores, you can create separate properties with different values.

Because most of the entries above are Spring property tokens, we need to enter the correct values into the property file that’s being used by Spring to store these values. For the tutorial, add to TutorialDeploymentPropertyPlaceholders.properties:


wss.keyAlias=the alias you used when generating the service key
wss.keyPassword=the key password you used when generating the service key
wss.keystorePassword=the store password you used when generating the service keystore
wss.keystoreType=the type of the service keystore (typically jks)
wss.keystorePath=${configDirectory}/the name you gave the service keystore

Configure the Client

The client configuration is essentially the same, with very minor changes. For the tutorial:

war-spring-test.xml Additions

<bean id="keystorePasswordCallback" class="com.example.tutorial.ws.security.KeystorePasswordCallback">
    <property name="passwords">
        <map>
            <entry key="${wss.keyAlias}" value="${wss.keyPassword}"/>
        </map>
    </property>
</bean>

<util:properties id="keystoreProperties">
    <prop key="org.apache.ws.security.crypto.provider">org.apache.ws.security.components.crypto.Merlin</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.type">${wss.keystoreType}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.password">${wss.keystorePassword}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.keystore.alias">${wss.keyAlias}</prop>
    <prop key="org.apache.ws.security.crypto.merlin.file">${wss.keystorePath}</prop>
</util:properties>


<jaxws:client ...>
    <jaxws:properties>
        <entry key="ws-security.callback-handler" value-ref="keystorePasswordCallback"/>
        <entry key="ws-security.encryption.properties" value-ref="keystoreProperties"/>
        <entry key="ws-security.signature.properties" value-ref="keystoreProperties"/>
        <entry key="ws-security.encryption.username" value="${wss.serverKeyAlias}"/>
    </jaxws:properties>
</jaxws:client>

Note that this one uses a specific key alias for the “username”.

Then to the properties file:


wss.keyAlias=the alias you used when generating the client key
wss.keyPassword=the key password you used when generating the client key
wss.keystorePassword=the store password you used when generating the client keystore
wss.keystoreType=the type of the client keystore (typically jks)
wss.keystorePath=${configDirectory}/the name you gave the client keystore
wss.serverKeyAlias=the alias you used when generating the server key

Run and Test

This should be it. You should now be able to run the service and test its encryption functionality. The tutorial code has logging interceptors turned on so you can see the encrypted and signed messages.

Notes about the Encrypted Messages

Hopefully, if everything works, the exchanged messages should look much like this:

An Encrypted Message

[full encrypted SOAP message sample omitted]

The most notable change between this and a “normal” SOAP message is the wsse:Security header and the blocks of xenc:CipherData. I found several things worth noting, because nothing I had read explained how this worked:

  • The wsse:Security element in the header contains the information needed to decrypt and verify the message.
  • The wsse:BinarySecurityToken element contains the actual token data
  • The wsu:Timestamp element contains our requested timestamp, in this case expiring in 5 minutes. Messages sent after the expiration date should fail.
  • The xenc:EncryptedKey element contains information about the key that was actually used to encrypt the message. It holds the token reference for the encrypting key, which in the case of the above message is the public key of the server. It also contains an xenc:CipherValue element which, as I understand it, is a randomly generated 128-bit symmetric key, encrypted with the public key of the server. Only this single-use symmetric key is used to encrypt the message itself. This provides the speed of symmetric encryption coupled with the security of public/private-key encryption, by exchanging random, single-use keys encrypted with the public key of the recipient.
  • The xenc:EncryptedKey element also contains a reference list of the elements it should be used to decrypt.
  • The first xenc:EncryptedData element contains the signature for the message, encrypted with the given symmetric key.
  • The second xenc:EncryptedData contains the body of the message, encrypted with the given symmetric key.
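To make that structure concrete, here is a heavily abbreviated skeleton of such a message (element layout only; the Ids, values, and namespace prefixes are illustrative, not taken from a real capture):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security>
      <!-- the sender's certificate -->
      <wsse:BinarySecurityToken>MIIB...</wsse:BinarySecurityToken>
      <!-- the requested timestamp, expiring five minutes after creation -->
      <wsu:Timestamp>
        <wsu:Created>2011-01-01T12:00:00Z</wsu:Created>
        <wsu:Expires>2011-01-01T12:05:00Z</wsu:Expires>
      </wsu:Timestamp>
      <xenc:EncryptedKey>
        <!-- single-use symmetric key, encrypted with the server's public key -->
        <xenc:CipherData>
          <xenc:CipherValue>kx7f...</xenc:CipherValue>
        </xenc:CipherData>
        <!-- which elements that key decrypts -->
        <xenc:ReferenceList>
          <xenc:DataReference URI="#EncSig"/>
          <xenc:DataReference URI="#EncBody"/>
        </xenc:ReferenceList>
      </xenc:EncryptedKey>
      <!-- the message signature, encrypted with the symmetric key -->
      <xenc:EncryptedData Id="EncSig">...</xenc:EncryptedData>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- the original body, encrypted with the symmetric key -->
    <xenc:EncryptedData Id="EncBody">...</xenc:EncryptedData>
  </soap:Body>
</soap:Envelope>
```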

Completed Source

You can download the completed tutorial source here.

PRWEB PRWEB // March 27, 2012

Concentric Sky and U.N. Release UN CountryStats for iPhone and iPad

Web firm lands big contract photo image

Concentric Sky and the United Nations today released UN CountryStats, a free data visualization tool for iPhone and iPad. The application charts, graphs and compares the United Nations statistical data of 216 countries.

UN CountryStats includes information from the United Nations World Statistics Pocketbook 2010. With this interactive tool, users can access dozens of different statistics on a country’s society, economy, environment and more. Graphs show a statistical comparison of up to three nations, and can be saved as favorites for quick future reference. Much of the data includes records for past years, providing an easy way to visualize changes over time.

Cale Bruckner, Concentric Sky’s vice president of technology, said UN CountryStats is a great example of an application that has successfully added an extra dimension to data that was previously published in a very two-dimensional way.

“Having the ability to quickly compare over 200 countries on a wide range of indicators makes the data a lot more interesting and attracts a new audience,” Bruckner said.

Bruckner and Project Designer Adam Barton worked with Steve Slawsky, Manager of Digital Development and Systems for United Nations Publications. Slawsky’s team provided the concept and data, which Concentric Sky used to develop the mobile application.

The iTunes App Store recently featured UN CountryStats in its “New and Noteworthy” list. A leading iOS developer, Concentric Sky has created mobile applications for Encyclopedia Britannica (Encyclopedia and Britannica Kids series), the World Bank (World Bank: Doing Business at a Glance) and National Geographic (GeoBee).

View original article

All Things Digital // September 28, 2011

Encyclopaedia Britannica Now Fits Into an App

Encyclopedia Britannica now fits into an app image

The Encyclopaedia Britannica has been the most prestigious general encyclopedia in the English language for what seems like forever. But it has always been expensive, and a bit stodgy. Today, when people need to look up information, they’re likely to just do a Web search, or to consult the free, community-written, online encyclopedia, Wikipedia.

The Britannica, however, isn’t going away, or ignoring the digital world. It has long had a paid website. When it comes to school research, it is often trusted by many teachers and parents over less rigorously vetted sources. And now, it is about to launch a slick iPad app containing its entire content at a greatly reduced price: $2 a month, or $24 a year, versus $70 a year for the Web version and about $1,400 for the venerable print version. (People who pay for the Web version also get access to the iPad app at no extra cost.)

I’ve been testing this new iPad app, and I like it. It is much cleaner and more attractive than the cluttered Britannica website and sports some nice features, including a dynamic “link map” showing the relationship between topics in a visual format. Unlike the Web version, it is free of ads. The app is expected to be available in a couple of weeks.

Whether or not this new Britannica app is for you will be a personal decision based on what you’re looking for, and how much you value an edited, highly curated source over the broader, more easily updated, but crowd-sourced, Wikipedia, which also is available via a variety of iPad apps. Of course, many subscribers to Britannica will still use Wikipedia or other Web sources for research.

Since I don’t presume to be an academic expert, for this review I focused mostly on the experience of using the forthcoming Britannica app, rather than attempting to analyze its contents. Still, some content comparisons with Wikipedia are useful to keep in mind.

The Britannica app contains 140,000 articles. Wikipedia has about 3.7 million. Many contemporary topics, like the latest in pop culture, or some current public figures, are included in Wikipedia, but missing from Britannica. For instance, the widely praised and popular TV show “Modern Family” gets lavish coverage in Wikipedia, but doesn’t make the cut in Britannica.

On the other hand, there are some topics in Britannica’s smaller collection of articles that I couldn’t locate in Wikipedia. One example: an article on Suzanne Douvillier, described by Britannica as “probably the first woman choreographer in America.” And Britannica has many articles written by credentialed academics, journalists and other experts, while it can be difficult to discern the credentials, or even the real name, of a Wikipedia contributor.

The forthcoming Britannica iPad app, which also is slated to appear later in an iPhone and Android version, is handsome and colorful. It’s free to download and offers a small amount of free content, even for nonsubscribers. But the vast majority of its content is accessible only to subscribers.

Perhaps the coolest feature is the link map, triggered from an icon at the top of each article page. This generates a spider web of icons representing other articles related to the one you were reading.

For instance, the link map for the article on Apple Chairman Steve Jobs spawns tendrils leading to articles on things like “personal computer” and “software.” If you then tap on say, “software,” more tendrils appear, leading to topics like “Bill Gates” or “open source.” You can tap on any of the icons to read the underlying article.

This kind of visual array of related items isn’t a new idea. In fact, there is an iPad app called WikiNodes which does something similar for Wikipedia content. But Britannica has implemented the idea nicely.

The home page of the Britannica app features a large daily color photo with an accompanying free article related to the picture. For instance, a photo of the Croatian National Theatre links to a free article on the country’s capital, Zagreb.

Also on the home page is an event that occurred that day, such as the birth of the actress Brigitte Bardot, which can be tapped to reveal an article about her.

In addition, the home page features a large search box for looking up topics and three links in a section labeled “Browse.” One, called “A-Z,” allows you to just leaf through the Britannica alphabetically. Another, called Top Articles, includes 100 free articles on popular topics like the “Amazon River,” “the Beatles,” “the French Revolution” or “William Shakespeare.” The third is an expanded list of events and births that occurred on the day you are using the app.

Beyond the couple of free home page articles and the 100 free top articles, nonsubscribers will only see the first 100 words or so of each article.

When viewing an article, you can read through it by merely swiping from page to page, a process I found quick and reliable. A progress bar and page number shows where you are in an article, but there is no bookmark feature. The font size can be increased or decreased.

To the left of each article, there are icons that allow you to save it for offline reading, mark it as a favorite, or email a link to the Web version of that article, which can be read even by a nonsubscriber. A section of the app called My Britannica lists all your saved, favorite and recently viewed articles. I found all of this easy to discover and use.

At the top of each article, there are icons that, when tapped, display the table of contents for the article, and a gallery of images from the article, expanded to a larger size.

I found some things missing from the app, including some features that are present in Britannica’s website.

The most glaring omission is the lack of links to related sources outside the encyclopedia. There are also no videos in the app. And you can’t print articles from the app, though Britannica says it plans to add printing and the ability to post references to articles to social networks.

Especially for students, or anyone who values what Britannica has to offer, I found the new Britannica iPad app to be a pleasing, easy way to navigate through a large body of knowledge.

By Walt Mossberg

View original article

Ross Lodge // March 31, 2011

Emulating JRE Classes In GWT

blog-image-6.jpg image

The Google Web Toolkit (GWT) SDK provides a set of core Java APIs and Widgets - speeding the development of powerful AJAX applications in Java that can then be compiled to highly optimized JavaScript that runs across all browsers, including mobile browsers for Android and iOS.

However, when working with GWT, you quickly find that the toolkit’s implementation of the Java APIs is incomplete, and that using types Google hasn’t made translatable will result in a GWT compiler error.

We wanted to be able to use the java.net.URI and java.util.UUID classes in our client-side code, neither of which are supported by GWT. This tutorial describes a method for implementing client-side versions of JDK classes that GWT doesn’t support. Fortunately, GWT provides support for overriding one package implementation with another.

There have been some attempts to implement more of the JDK (for example, GWTx), but they are quite incomplete.

It is extremely useful, then, to have a clear technique for creating client-side versions of some JDK classes that are unavailable. GWT helpfully provides a mechanism for doing this (look down under “Overriding one package implementation with another”), but doesn’t tell you much about how to use it.

I went looking for examples of how to use the “super-source” XML tag to create client-side implementations of some Java types, and found a couple of incomplete or confusing tutorials, but none of them told me exactly how to accomplish this.

In addition, we had another problem: we needed to be able to pass the URI and UUID classes back and forth between the client javascript and GWT-implemented services on the server, and none of the above blog posts gave a hint as to how to make the Java and Javascript versions of the classes mutually serializable (GWT uses the standard JDK implementation on the server, but the Javascript override classes on the client).

There’s a mechanism for this, as well, but again I couldn’t find good Google documentation on it. I did find How to use a CustomFieldSerializer in GWT, but it deals with entirely custom classes and not classes meant to act as JDK API classes.

We eventually figured this out and I thought I’d put this tutorial together in hopes that others can do the same if necessary.

Tools Needed

  • Maven, 2.2.1 or greater
  • JDK 1.6 or better
  • Eclipse (optional)

These can be downloaded from the websites of the respective projects.

Project Setup

If you don’t have an existing project, you can use the gwt-maven-plugin archetype to create one:

Emulating JRE Classes In GWT image

It will prompt you for group id, artifact id, version, and package names; for the example I used “com.example.gwt”, “SuperSource”, “1.0.0-SNAPSHOT”, and “com.example.gwt”.

You will need a working maven pom.xml file that includes the necessary GWT dependencies and the gwt-maven-plugin; the archetype may give you this, although it doesn’t use the latest GWT versions and in my experience produces some errors. The one I used, for example, looks like this:

Emulating JRE Classes In GWT image

Create source directories: src/main/java, src/main/resources, and src/main/webapp. You will need to provide the necessary html, css, and web.xml files for your implementation. The example code linked above includes sample sources for these taken directly from the archetype that the gwt-maven-plugin provides.

Create server-side service interface and implementation, and an EntryPoint class. The ones in the linked source are adapted from the “Greeting Service” that comes with the archetype.

Create a JDK-Emulation Module

Create a new module in the resources directory, at the same level as your existing GWT module. In the example source, I called it “SuperSourceJre.gwt.xml” and placed it in the src/main/resources/com/example/gwt directory, next to the one created by the archetype. It should look like this:

Emulating JRE Classes In GWT image
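The screenshot above is not reproduced in this archive; given the paths described below, the module file is presumably something close to this sketch (the rename-to value and the inherits line are assumptions):

```xml
<module rename-to="supersourcejre">
  <inherits name="com.google.gwt.user.User"/>
  <!-- compile files under com/example/gwt/jre/** as if the
       com.example.gwt.jre prefix were stripped from their package -->
  <super-source path="jre"/>
</module>
```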

The path in the super-source tag is arbitrary, but must match the name of a directory directly under the package or directory where you created the new file; obviously you can also set the “rename-to” value to whatever you would like. What GWT does is take the package specified as the source path and remove it from the front of the path when compiling anything under that directory. So files under “com/example/gwt/jre/java/net” will be compiled as if they belonged to the package “java.net”. This allows you to create classes that get “renamed” to classes in the core JDK. Note that the sub-path must then match the package of the class in the JDK.

Import this new module in your existing module(s). For example, in the existing “SuperSourceTest.gwt.xml” file, I added:

Emulating JRE Classes In GWT image
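That screenshot is also missing from this archive; the addition is presumably a single inherits line referencing the new module by its logical name:

```xml
<inherits name="com.example.gwt.SuperSourceJre"/>
```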

Create the Emulated Classes

In the attached example source code, I emulate both java.net.URI and java.util.UUID, and I also include some fancy JSNI native code to allow me to generate UUIDs on the client side if necessary. Of course, your needs might be different. The key is that only the methods you implement in your emulation classes are available to the client: client-side code that calls an unimplemented method will appear to compile as Java, but will fail GWT compilation.

The classes you need to emulate should be created in a package-dependent path in the resources directory under where you created your new emulation module. For example, I created the classes:

  • src/main/resources/com/example/gwt/jre/java/net/URI.java
  • src/main/resources/com/example/gwt/jre/java/net/URISyntaxException.java
  • src/main/resources/com/example/gwt/jre/java/util/UUID.java

(The exception is created because the URI constructor throws URISyntaxException, so it must also be emulated.) Note that I put these in the src/main/resources directories instead of src/main/java: Eclipse and other IDEs and compilers will refuse to compile them as Java, since they have invalid package specifications, so I include them as “resources” so they end up in the source and classes directories but nothing except GWT attempts to compile them.

Create the source code for these classes. Ideally, you would implement the entire functionality of the original JDK class yourself, but you at least need to emulate a default constructor and the minimum logic necessary for your client. The signatures of the methods you implement should mirror exactly the ones they are replacing in the JDK itself.

Here are the implementations I used:

Emulating JRE Classes In GWT image
Emulating JRE Classes In GWT image
Emulating JRE Classes In GWT image
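The three implementation screenshots are not reproduced in this archive. As an illustration only, a minimal client-side UUID emulation in the spirit described above might look like the following sketch (my own code, not the tutorial's; the real version also included JSNI-based random generation, which is omitted here). In the project it would live at src/main/resources/com/example/gwt/jre/java/util/UUID.java; the "package java.util;" declaration and the public modifiers are dropped here so the sketch stands alone.

```java
// Illustrative client-side stand-in for java.util.UUID.  In the real
// super-source layout this file declares "package java.util;" and the
// class and its methods are public.
class UUID {

    private final String value;

    private UUID(String value) {
        this.value = value;
    }

    // Mirrors the signature of java.util.UUID.fromString(String)
    static UUID fromString(String name) {
        if (name == null || name.split("-").length != 5) {
            throw new IllegalArgumentException("Invalid UUID string: " + name);
        }
        return new UUID(name.toLowerCase());
    }

    @Override
    public String toString() {
        return value;
    }

    // equals and hashCode are based on the canonical string form, so two
    // UUIDs built from the same text compare equal on the client
    @Override
    public boolean equals(Object obj) {
        return obj instanceof UUID && value.equals(((UUID) obj).value);
    }

    @Override
    public int hashCode() {
        return value.hashCode();
    }
}
```

Only the members sketched here would be callable from client-side code; anything else fails GWT compilation, as noted above.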

At this point, GWT will happily compile these classes, but they can only be used on the client side (server-side code will use the original JDK classes). They can’t be passed as service arguments between client and server because GWT will see them as different because their serialized signatures are different.

Implementing Serialization

GWT allows you to specify custom serialization for your classes. The method is pretty straightforward: you need to create a class in the same package as the class you want to serialize named [Class to Serialize]_CustomFieldSerializer. The name and package must be exact, or GWT won’t be able to find the serialization class.

In the serializer class, you must implement two methods:

  • serialize
  • deserialize

You may also implement “instantiate” if default constructor instantiation is not adequate.

For example:

Emulating JRE Classes In GWT image

Because we want these serializers available on both the server and client side, they should be placed in src/main/java/… so they will be compiled by both GWT and the java compiler.

This is pretty straightforward, and works fine on the client. But there’s a major problem: the classes we want to emulate are in JDK core packages (java.net, java.util). On the server, the classloader will refuse to load any custom class in core packages (packages beginning with java or javax) as part of the JDK’s core sandboxing. So we can’t place our serializers in the right package.

Apparently this was a problem for Google as well, because their own emulation classes often need custom serializers. So they created a “magic value” package name that their internal API checks for serializers, in addition to the raw package name. This value is “com.google.gwt.user.client.rpc.core”; the raw package is appended to it. So if GWT were looking for a serializer for “java.net.URI”, it would first check for “java.net.URI_CustomFieldSerializer” and, if none was found, then check for “com.google.gwt.user.client.rpc.core.java.net.URI_CustomFieldSerializer”. If we place our serializers in such a package, GWT will find them automatically. Of course, this is internal GWT API to “use at your own risk”, but we haven’t found another way around this problem yet.

So to serialize our own URI and UUID classes, we build custom serializers as (note the src/main/java location and the special package):

  • src/main/java/com/google/gwt/user/client/rpc/core/java/net/URI_CustomFieldSerializer.java
  • src/main/java/com/google/gwt/user/client/rpc/core/java/util/UUID_CustomFieldSerializer.java

These look like:

Emulating JRE Classes In GWT image
Emulating JRE Classes In GWT image
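Those two screenshots are likewise missing from this archive. Based on the toString/fromString approach described next and GWT's static-method serializer convention, the UUID one plausibly looks close to this sketch (my reconstruction, not the original code; it compiles only against the GWT user library):

```java
package com.google.gwt.user.client.rpc.core.java.util;

import java.util.UUID;

import com.google.gwt.user.client.rpc.SerializationException;
import com.google.gwt.user.client.rpc.SerializationStreamReader;
import com.google.gwt.user.client.rpc.SerializationStreamWriter;

public final class UUID_CustomFieldSerializer {

    public static void serialize(SerializationStreamWriter writer, UUID instance)
            throws SerializationException {
        // serialize the object using its canonical string form
        writer.writeString(instance.toString());
    }

    public static UUID instantiate(SerializationStreamReader reader)
            throws SerializationException {
        // rebuild the UUID from the string written above
        return UUID.fromString(reader.readString());
    }

    public static void deserialize(SerializationStreamReader reader, UUID instance) {
        // nothing to do: instantiate() already restored the full state
    }
}
```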

The code implementation for both of these is pretty straightforward: serialize the object using toString, and instantiate it from the string.

Utilizing the Classes

Obviously you will have your own specific uses of these classes. To test them, I created a simple DTO bean in the archetype’s shared package:

Emulating JRE Classes In GWT image
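The bean screenshot is not reproduced here; a plausible minimal version (the class and field names are my invention) is just a serializable holder for a URI and a UUID:

```java
import java.io.Serializable;
import java.net.URI;
import java.util.UUID;

// Hypothetical DTO for the round-trip test.  On the client, GWT
// substitutes the emulated URI and UUID classes via super-source;
// on the server the real JDK classes are used.
class EchoBean implements Serializable {

    private URI uri;
    private UUID uuid;

    // GWT-RPC requires a no-argument constructor
    EchoBean() {
    }

    URI getUri() {
        return uri;
    }

    void setUri(URI uri) {
        this.uri = uri;
    }

    UUID getUuid() {
        return uuid;
    }

    void setUuid(UUID uuid) {
        this.uuid = uuid;
    }
}
```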

I then modified the archetype’s GreetingService and related implementation, changing greetServer to look like this:

Emulating JRE Classes In GWT image

The implementation was changed similarly; it both echoes back the incoming URI and UUID entries and generates some new random ones. This lets me make sure that the equals and hashCode implementations work reasonably well.

Emulating JRE Classes In GWT image

Then I modified the EntryPoint to use the new method and display the result. If you’re using gwt-maven-plugin to generate the async interfaces you’ll have to do a maven compile to have those interfaces generated before the code below will compile in Eclipse.

Emulating JRE Classes In GWT image

You should then compile this using mvn clean install gwt:run to make sure it works correctly in both hosted and production modes.

The Register-Guard // June 15, 2010

Tech Company Aiming High

Cattle in a field

A small Eugene tech firm continues to rub elbows with some big-name national clients.

Concentric Sky, a 5-year-old Web development shop, recently completed a mobile computing application for National Geographic’s GeoBee, which is like a geography spelling bee for students grades 4 through 8.

The “app” for the iPhone, and larger format iPad, tests users’ geography smarts in three ways. It throws out multiple choice questions from a library of more than 1,300 National Geographic GeoBee questions. It challenges users to locate spots on an interactive map, drawing from a catalog of more than 1,000 locations. Plus, a bonus round challenges users, cued by a single photo, to find the place in the photo on an interactive map.

The app, available on iTunes, costs $1.99 for the iPhone and $3.99 for the iPad.

The GeoBee project could lead to more work with National Geographic, said Concentric Sky’s founder and president, Wayne Skipper.

“We’re talking about several other products with them,” he said, declining to go into specifics about the products or when they might be released.

Kevin Yam, who is director of Mobile and Interactive Platforms at National Geographic, said the organization is interested in doing more projects with Concentric Sky.

“They did great work for us,” Yam said. “We would definitely be interested in working with them again as different projects arise. It’s like if you buy a car you like, you’ll go back to that dealer again.”

Yam said that another company National Geographic had worked with first brought Concentric Sky to the organization’s attention.

“We were familiar with the work they did … they had already put out a game called GeoTap,” he said. “They obviously had a great interest in geography, and National Geographic was founded in the 1800s to increase and diffuse geographic knowledge. It’s kind of in our DNA (as well as) our name.”

National Geographic is the second high-profile client Concentric Sky has partnered with this year. In January, the company announced it had teamed up with Encyclopaedia Britannica to initially introduce 10 to 12 applications, such as “This Day in History” or “Quote of the Day,” for iPhones, Blackberries and Android-based “smart phones.”

When Skipper founded Concentric Sky as a personal consultancy in 2005, he focused on Web development for a large educational client. Now the 45-person firm has dozens of clients and nearly 50 mobile applications for the iPhone, as well as applications for Palm, Blackberry and Android-based devices.

And it is in the process of expanding, Skipper said, with 10 positions to fill. “One of our primary challenges is a lack of appropriately skilled technologists,” Skipper said.

For competitive reasons, Skipper declines to reveal the company’s annual revenues, or how payment is structured in individual contracts.

In general, he said, the company makes money on mobile applications by receiving a negotiated amount for the work, a share of royalties, or involvement in some other partnership arrangement.

Mobile development is a fast-growing piece of Concentric Sky’s business, Skipper said.

“We’re rapidly becoming a go-to shop for large brands around the world,” he said. “We’re approached quite regularly by these companies to do work with them, and this (the National Geographic partnership) will just add credibility to our work in this space.”

Skipper said he thinks mobile development is a market that’s here to stay.

“You could almost think of it as the new Internet,” he said.

Concentric Sky also is raising its profile nationally, and even internationally, in other ways.

The Internet Engineering Task Force, a loosely self-organized group of people who develop Internet protocols, recently chose Concentric Sky as one of its core Web developers, Skipper said.

“We went up against vendors from around the world and were one of only three selected,” he said, adding that the others are in Denmark and Spain.

“Being the only U.S. vendor is a great honor, and I think it shows off the capabilities that can be found at small companies in small towns like Eugene, Oregon,” Skipper said.

At Google’s request, Skipper also was an expert witness in the Federal Trade Commission’s investigation into Google’s acquisition of Admob — one of the world’s largest mobile advertising networks.

By Sherri Buri McDonald. Business Editor Ilene Aleshire contributed to this story. Appeared in print in The Register-Guard: Tuesday, June 15, 2010, Page B4

View original article