Drupal Planet

Drupal.org - aggregated feeds in category Planet Drupal

Agiledrop.com Blog: Top Drupal blog posts from November 2018

December 20, 2018 - 18:21

To continue with our tradition of compiling the top blog posts involving Drupal from the previous month, we’ve prepared a list of blog posts from November 2018 that stuck with us the most.


Droptica: It’s been a year with Droopler!

December 20, 2018 - 16:00
Soon, we will be celebrating the first anniversary of the day Droopler – our open Drupal 8 distribution – was released. It is a perfect time for some summaries and plans for the future. In this article, I’m going to show you how Droopler works and what awaits its users in the upcoming release.

Jeff Geerling's Blog: Deploying an Acquia BLT Drupal 8 site to Kubernetes

December 20, 2018 - 05:28

Wait... what? If you're reading the title of this post, and are familiar with Acquia BLT, you might be wondering:

  • Why are you using Acquia BLT with a project that's not running in Acquia Cloud?
  • You can deploy a project built with Acquia BLT to Kubernetes?
  • Don't you, like, have to use Docker instead of Drupal VM? And aren't you [Jeff Geerling] the maintainer of Drupal VM?

Well, the answers are pretty simple:

Aten Design Group: GraphQL with Drupal: Getting Started

December 20, 2018 - 01:29

Decoupling Drupal is a popular topic these days. We’ve recently posted about connecting Drupal with Gatsby, a subject that continues to circulate around the Aten office. There are a number of great reasons to treat your CMS as an API. You can leverage the content modeling powers of Drupal and pull that content into your static site, your javascript application, or even a mobile app. But how to get started?

In this post I will first go over some basics about GraphQL and how it compares to REST. Next, I will explain how to install the GraphQL module on your Drupal site and how to use the GraphiQL explorer to begin writing queries. Feel free to skip the intro if you just need to know how to install the module and get started.

A Brief Introduction to GraphQL

Drupal is deep in development on its API-First Initiative, and the core team is working on getting json:api into core. This exposes Drupal's content through a consistent, standardized REST interface, which has many advantages.

Recently the JavaScript community has become enamored with GraphQL, a query language for APIs that is touted as an alternative to REST.

Developed by Facebook, GraphQL is now used across the web from the latest API of Github to the New York Times redesign.

GraphQL opens up APIs in a way that traditional REST endpoints cannot. Rather than exposing individual resources with fixed data structures and links between resources, GraphQL gives developers a way to request any selection of data they need. Multiple resources on the server side can be queried at once on the client side, combining different pieces of data into one query and making the job of the front-end developer easier.

Why is GraphQL Good for Drupal?

GraphQL is an excellent fit for Drupal sites, which are made up of entities that have data stored as fields. Some of these fields could store relationships to other entities. For example, an article could have an author field which links to a user.

The Limitations of REST

Using a REST API with that example, you might query for “Articles”. This returns a list of article content including an author user ID. But to get that author’s content you might need a follow-up query per user ID to fetch the author’s info, then stitch the article together with the parts of the author you care about. You may have only wanted the article title, link, and the author name and email, but if the API is not well designed this could require several calls to the server, each returning far more info than you wanted: perhaps the article publish date, its UUID, maybe the full content text as well. This problem of “overfetching” and “underfetching” is not an endemic fault of all REST-based APIs. It’s worth mentioning that json:api has its own solutions for this specific example, using sparse fieldsets and includes.

Streamlining with GraphQL

With GraphQL, your query can request just the fields needed from the Article. Because of this flexibility, you craft the query exactly as you want it, listing only the fields you need: for example, the title and URL, then traversing the relationship to the user to grab the name and email address. It also makes it simple to restructure the object you get back, starting with the author and then fetching a reverse reference to Articles. Just by rewriting the query you can change the display from an article teaser to a user with a list of their articles.

Either of these queries can be written, and fields can be added or removed from the result, all without writing any code on the backend or any custom controllers.
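
To make the idea concrete, here is an illustrative GraphQL query for the article teaser described above, written against a hypothetical schema (the articles, title, url, and author names are invented for this example, not taken from Drupal):

    {
      articles {
        title          # only the fields we actually need
        url
        author {       # traverse the relationship to the user
          name
          email
        }
      }
    }

Reshaping the result is just a matter of rewriting the query, for example starting from the author instead:

    {
      author(name: "jane") {   # "jane" is a placeholder
        name
        articles {
          title
          url
        }
      }
    }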

This is all made possible by the GraphQL module, which exposes every entity in Drupal, from pages to users to custom data defined in modules, as a GraphQL schema.

Installing GraphQL for Drupal

If you want to get started with GraphQL and Drupal, the process requires little configuration.

  1. Install the module with Composer, since it depends on the vendor library graphql-php. If you're using a Composer-based Drupal install, use the command:
    composer require drupal/graphql
    to install the module and its dependencies.
  2. Enable the module; it will generate a GraphQL schema for your site which you can immediately explore. A full setup sketch follows below.
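
On a Composer-managed site, the whole setup might look like the following sketch. The drush commands and the graphql_core submodule name reflect the module's 8.x-3.x branch and may differ on your version:

    # Download the module and its PHP dependencies (including graphql-php).
    composer require drupal/graphql

    # Enable the module; on 8.x-3.x the base schema ships in graphql_core.
    drush en graphql graphql_core -y

    # The GraphiQL explorer should now be reachable at /graphql/explorer.
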
Example Queries with GraphiQL

Now that you have GraphQL installed, what can you do? How do you begin to write queries to explore your site’s content? One of the most compelling tools built around GraphQL is the explorer, called GraphiQL. This is included in the installation of the Drupal GraphQL module. Visit it at:

/graphql/explorer

The page is divided into left and right sides. At the left you can write queries. Running a query with the button in the top left will display the response on the right pane.

Write a basic query on the left side, hit the play button to see the results on the right.
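
A minimal first query might look like this, assuming the module's default schema (entityLabel is a generic entity property exposed by the module; entityId is shown on the same assumption):

    {
      nodeQuery {
        entities {
          entityId       # the node ID
          entityLabel    # the node title
        }
      }
    }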

As you write a query, GraphiQL will try to autocomplete to help you along.

As you type, GraphiQL will try to autocomplete. With entities, you can hit play to have it fill in all the default properties.

You can also dive into the live documentation in the far right pane. You'll see queries for your content types, the syntax for selecting fields as well as options for filtering or sorting.

Since the schema is self documenting, you can explore the options available in your site.

The documentation here uses autocomplete as well. You can type the name of an entity or content type to see what options are available.

Add additional filter conditions to your query.

Filters are condition groups; in the above example I am filtering by the "article" content type.

In the previous example I am just getting generic properties of all nodes, like entityLabel. However, if I am filtering by the "Article" type, I would want access to fields specific to Articles. By defining those fields in a "fragment", I can substitute the fragment right into my query in place of those individual defaults.

Use fragments to set bundle specific fields.

Because my author field is an entity reference, you'll see the syntax is similar to the nodes above. Start with entities, then list the fields on that entity you want to display. This would be an opportunity to use another fragment.
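
A rough sketch of such a query, assuming the module generates a NodeArticle type for the article bundle and that the site exposes an author entity reference field as fieldAuthor (both names are assumptions to adapt to your own content model):

    query {
      nodeQuery(filter: {conditions: [{field: "type", value: "article"}]}) {
        entities {
          entityLabel        # generic node property
          ...ArticleFields   # bundle-specific fields come from the fragment
        }
      }
    }

    fragment ArticleFields on NodeArticle {
      fieldAuthor {          # hypothetical entity reference field to a user
        entity {
          entityLabel        # the referenced author's name
        }
      }
    }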

Now that the query is displaying results how I want, I can add another filter to show different content: in this case, a list of unpublished content.

Add another filter to see different results.

Instead of showing a list of articles with their user, I could rearrange this query to get all the articles for a given user.

Display reverse references with the same fragment.

I can reuse the same fragment to get the Article exactly as I had before, or edit that fragment to remove just the user info. The nodeQuery just changes to a userById, which takes an ID much as the nodeQuery can take a filter. Notice the reverseFieldAuthorNode: this allows us to get any content that references the user.
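
Under the same assumptions as the earlier sketch, the reshaped query might look roughly like this ("1" is a placeholder user ID, and reverseFieldAuthorNode follows the module's reverse-reference naming for the hypothetical fieldAuthor field):

    query {
      userById(id: "1") {          # placeholder user ID
        entityLabel                # the user's display name
        reverseFieldAuthorNode {   # content referencing this user via fieldAuthor
          entities {
            entityLabel
            ...ArticleFields       # reuse (or trim) the same fragment as before
          }
        }
      }
    }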

Up Next: Building a Simple GraphQL App

If you’re new to GraphQL, spend a little time learning how the query language works by practicing in the GraphiQL Explorer. In the next part of this post I will go over some more query examples, write a simple app with create-react-app and apollo, and explain how GraphQL can create and update content by writing a mutation plugin.

Lullabot: A Toolset For Enterprise Content Inventories

December 20, 2018 - 01:06

Earlier this year, Lullabot began a four-month-long content strategy engagement for the state of Georgia. The project would involve coming up with a migration plan from Drupal 7 to Drupal 8 for 85 of their state agency sites, with an eye towards a future where content can be more freely and accurately shared between sites. Our first step was to get a handle on all the content on their existing sites. How much content were we dealing with? How is it organized? What does it contain? In other words, we needed a content inventory. Each of these 85 sites was its own individual install of Drupal, with the largest containing almost 10K unique URLs, so this one was going to be a doozy. We hadn't really done a content strategy project of this scale before, and our existing toolset wasn't going to cut it, so I started doing some research to see what other tools might work.

Open up any number of content strategy blogs and you will find an endless supply of articles explaining why content inventories are important, and templates for storing said content inventories. What you will find a distinct lack of is the how: how does the data get from your website to the spreadsheet for review? For smaller sites, manually compiling this data is reasonably straightforward, but once you get past a couple hundred pages, this is no longer realistic. In past Drupal projects, we have been able to use a dump of the routing table as a great starting point, but with 85 sites even this would be unmanageable. We quickly realized we were probably looking at a spider of some sort. What we needed was something that met the following criteria:

  • Flexible: We needed the ability to scan multiple domains into a single collection of URLs, as well as the ability to include and exclude URLs that met specific criteria. Additionally, we knew that there would be times when we might want to just grab a specific subset of information, be it by domain, site section, etc. We honestly weren't completely sure what all might come in handy, so we wanted some assurance that we would be able to flexibly get what we needed as the project moved forward.
  • Scalable: We are looking at hundreds of thousands of URLs across almost a hundred domains, and we knew we were almost certainly going to have to run it multiple times. A platform that charged per-URL was not going to cut it.
  • Repeatable: We knew this was going to be a learning process, and, as such, we were going to need to be able to run a scan, check it, and iterate. Any configuration should be saveable and cloneable, ideally in a format suitable for version control which would allow us to track our changes over time and more easily determine which changes influenced the scan in what ways. In a truly ideal scenario, it would be scriptable and able to be run from the command line.
  • Analysis: We wanted to be able to run a bulk analysis on the site’s content to find things like reading level, sentiment, and reading time. 

Some of the first tools I found were hosted solutions like Content Analysis Tool and DynoMapper. The problem is that these tools charge on a per-URL basis, and weren't going to have the level of repeatability and customization we needed. (This is not to say that these aren't fine tools, they just weren't what we were looking for in terms of this project.) We then began to architect our own tool, but we really didn't want to add the baggage of writing it onto an already hectic schedule. Thankfully, we were able to avoid that, and in the process discovered an incredibly rich set of tools for creating content inventories which have very quickly become an absolutely essential part of our toolkit. They are:

  • Screaming Frog SEO Spider: An incredibly flexible spidering application. 
  • URL Profiler: A content analysis tool which integrates well with the CSVs generated by Screaming Frog.
  • GoCSV: A robust command line tool created with the sole purpose of manipulating very large CSVs very quickly.

Let's look at each of these elements in greater detail, and see how they ended up fitting into the project.

Screaming Frog

Screaming Frog is an SEO consulting company based in the UK. They also produce the Screaming Frog SEO Spider, an application which is available for both Mac and Windows. The SEO Spider has all the flexibility and configurability you would expect from such an application. You can very carefully control what you do and don’t crawl, and there are a number of ways to report the results of your crawl and export it to CSVs for further processing. I don’t intend to cover the product in depth. Instead, I’d like to focus on the elements which made it particularly useful for us.

Repeatability

A key feature in Screaming Frog is the ability to save both the results of a session and its configuration for future use. The results are important to save because Screaming Frog generates a lot of data, and you don’t necessarily know which slice of it you will need at any given time. Having the ability to reload the results and analyze them further is a huge benefit. Saving the configuration is key because it means that you can re-run the spider with the exact same configuration you used before, meaning your new results will be comparable to your last ones. 

Additionally, the newest version of the software allows you to run scans using a specific configuration from the command-line, opening up a wealth of possibilities for scripted and scheduled scans. This is a game-changer for situations like ours, where we might want to run a scan repeatedly across a number of specific properties, or set our clients up with the ability to automatically get a new scan every month or quarter.

Extraction

As we explored what we wanted to get out of these scans, we realized that it would be really nice to be able to identify some Drupal-specific information (NID, content type) along with the more generic data you would normally get out of a spider. Originally, we had thought we would have to link the results of the scan back to Drupal’s menu table in order to extract that information. However, Screaming Frog offers the ability to extract information out of the HTML in a page based on XPath queries. Most standard Drupal themes include information about the node inside the CSS classes they create. For instance, here is a fairly standard Drupal body tag.

<body class="html not-front not-logged-in no-sidebars page-node page-node- page-node-68 node-type-basic-page">

As you can see, this class contains both the node’s ID and its content type, which means we were able to extract this data and include it in the results of our scan. The more we used this functionality, the more uses we found for it. For instance, it is often useful to be able to identify pages with problematic HTML early on in a project so you can get a handle on problems that are going to come up during migration. We were able to do things like count the number of times a tag was used within the content area, allowing us to identify pages with inline CSS or JavaScript which would have to be dealt with later.
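
In Screaming Frog these extractors live under Configuration > Custom > Extraction, where each one can be an XPath, CSS Path, or Regex expression. As a rough sketch (the exact patterns depend on your theme's markup), two regex extractors run against the body classes above could look like:

    Node ID       (Regex):  page-node-(\d+)
    Content type  (Regex):  node-type-([a-z0-9-]+)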

We’ve only begun to scratch the surface of what we can do with this XPath extraction capability, and future projects will certainly see us dive into it more deeply. 

Analytics

Another set of data you can bring into your scan is associated with information from Google Analytics. Once you authenticate through Screaming Frog, it will allow you to choose what properties and views you wish to retrieve, as well as what individual metrics to report within your result set. There is an enormous number of metrics available, from basics like PageViews and BounceRate to extended reporting on conversions, transactions, and ad clicks. Bringing this analytics information to bear during a content audit is the key to identifying which content is performing and why. Screaming Frog also has the ability to integrate with Google Search Console and SEO tools like Majestic, Ahrefs, and Moz.

Cost

Finally, Screaming Frog provides a straightforward yearly license fee with no upcharges based on the number of URLs scanned. This is not to say it is cheap (the cost is around $200 a year), but having it be predictable, without worrying about how much we used it, was key to making this part of the project work.

URL Profiler

The second piece of this puzzle is URL Profiler. Screaming Frog scans your sites and catalogs their URLs and metadata. URL Profiler analyzes the content which lives at these URLs and provides you with extended information about them. This is as simple as importing a CSV of URLs, choosing your options, and clicking Run. Once the run is done, you get back a spreadsheet which combines your original CSV with the data URL Profiler has put together. It provides an extensive number of integrations, many of them SEO-focused. Many of these require additional subscriptions to be useful; however, the software itself provides a set of content quality metrics when you check the Readability box. These include:

  • Reading Time
  • 10 most frequently used words on the page
  • Sentiment analysis (positive, negative, or neutral)
  • Dale-Chall reading ease score
  • Flesch-Kincaid reading ease score
  • Gunning-Fog estimation of years of education needed to understand the text
  • SMOG Index estimation of years of education needed to understand the text

While these algorithms need to be taken with a grain of salt, they provide very useful guidelines for the readability of your content, and in aggregate can be really useful as a broad overview of how you should improve. For instance, we were able to take this content and create graphs that ranked state agencies from least to most complex text, as well as by average read time. We could then take read time and compare it to "Time on Page" from Google Analytics to show whether or not people were actually reading those long pages. 

On the downside, URL Profiler isn't scriptable from the command-line the way Screaming Frog is. It is also more expensive, requiring a monthly subscription of around $40 a month rather than a single yearly fee. Nevertheless, it is an extremely useful tool which has earned a permanent place in our toolbox. 

GoCSV

One of the first things we noticed when we ran Screaming Frog on the Georgia state agency sites was that they had a lot of PDFs. In fact, they had more PDFs than they had HTML pages. We really needed an easy way to strip those rows out of the CSVs before we ran them through URL Profiler because URL Profiler won’t analyze downloadable files like PDFs or Word documents. We also had other things we wanted to be able to do. For instance, we saw some utility in being able to split the scan out into separate CSVs by content type, or state agency, or response code, or who knows what else! Once again I started architecting a tool to generate these sets of data, and once again it turned out I didn't have to.

GoCSV is an open source command-line tool that was created with the sole purpose of performantly manipulating large CSVs. The documentation goes into these options in great detail, but one of the most useful functions we found was a filter that allows you to generate a new subset of data based on the values in one of the CSV’s cells. This allowed us to create extensive shell scripts to generate a wide variety of data sets from the single monolithic scan of all the state agencies in a repeatable and predictable way. Every time we did a new scan of all the sites, we could, with just a few keystrokes, generate a whole new set of CSVs which broke this data into subsets that were just documents and just HTML, and then for each of those subsets, break them down further by domain, content type, response code, and pre-defined verticals. This script would run in under 60 seconds, despite the fact that the complete CSV had over 150,000 rows. 
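
As an illustration only, the kind of shell script we mean might look like the sketch below. The gocsv flag names and the Screaming Frog column names ("Content", "Address") are assumptions based on their documentation, so verify them against gocsv help and your own export before reusing this:

    #!/bin/bash
    # Split a full Screaming Frog export into smaller, purpose-built CSVs.
    # NOTE: flag and column names below are assumptions; check `gocsv help`.
    SCAN=full-scan.csv

    # Keep only rows whose Content column looks like an HTML page.
    gocsv filter --columns "Content" --regex "text/html" "$SCAN" > html-only.csv

    # Everything else (PDFs, Word documents, etc.) goes into a documents file.
    gocsv filter --exclude --columns "Content" --regex "text/html" "$SCAN" > documents.csv

    # Break the HTML pages down further, e.g. one CSV per domain.
    mkdir -p by-domain
    while read -r DOMAIN; do
      gocsv filter --columns "Address" --regex "^https?://$DOMAIN/" html-only.csv \
        > "by-domain/$DOMAIN.csv"
    done < domains.txt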

Another use case we found for GoCSV was to create pre-formatted spreadsheets for content audits. These large-scale inventories are useful, but when it comes to digging in and doing a content audit, there’s just way more information than is needed. There were also a variety of columns that we wanted to add for things like workflow tracking and keep/kill/combine decisions which weren't present in the original CSVs. Once again, we were able to create a shell script which allowed us to take the CSVs by domain and generate new versions that contained only the information we needed and added the new columns we wanted. 

What It Got Us

Having put this toolset together, we were able to get some really valuable insights into the content we were dealing with. For instance, by having an easy way to separate the downloadable documents from HTML pages, and then even further break those results down by agency, we were able to produce a chart which showed the agencies that relied particularly heavily on PDFs. This is really useful information to have as Georgia’s Digital Services team guides these agencies through their content audits. 


One of the things that URL Profiler brought into play was the number of words on every page in a site. Here again, we were able to take this information, cut out the downloadable documents, and take an average across just the HTML pages for each domain. This showed us which agencies tended to cram more content into single pages rather than spreading it around into more focused ones. This is also useful information to have on hand during a content audit because it indicates that you may want to prioritize figuring out how to split up content for these specific agencies.


Finally, after running our scans, I noticed that for some agencies, the amount of published content they had in Drupal was much higher than what our scan had found. We were able to put together the two sets of data and figure out that some agencies had been simply removing links to old content like events or job postings, but never archiving it or removing it. These stranded nodes were still available to the public and indexed by Google, but contained woefully outdated information. Without spidering the site, we may not have found this problem until much later in the process. 

Looking Forward

Using Screaming Frog, URL Profiler, and GoCSV in combination, we were able to put together a pipeline for generating large-scale content inventories that was repeatable and predictable. This was a huge boon not just for the State of Georgia and other clients, but also for Lullabot itself as we embark on our own website re-design and content strategy. Amazingly enough, we just scratched the surface in our usage of these products and this article just scratches the surface of what we learned and implemented. Stay tuned for more articles that will dive more deeply into different aspects of what we learned, and highlight more tips and tricks that make generating inventories easier and much more useful. 

Jeff Geerling's Blog: Hosted Apache Solr now supports Drupal Search API 8.x-2.x, Solr 7.x

December 19, 2018 - 23:50

Earlier this year, I completely revamped Hosted Apache Solr's architecture, making it more resilient, more scalable, and better able to support having different Solr versions and configurations per customer.

Today I'm happy to officially announce support for Solr 7.x (in addition to 4.x). This means that no matter what version of Drupal you're on (6, 7, or 8), and no matter what Solr module/version you use (Apache Solr Search or Search API Solr 1.x or 2.x branches), Hosted Apache Solr is optimized for your Drupal search!

Evolving Web: 5 Things New Drupal Site Builders Struggle With

December 19, 2018 - 23:15

I’ve recently been researching, writing, and talking about the content editor experience in Drupal 8. However, in the back of my mind I’ve been reflecting on the site builder experience. Every developer and site builder who learns Drupal is going to use the admin UI to get their site up-and-running. What are some things site builders often struggle with in the admin UI when learning Drupal?

Blocks

For most Drupal site builders, the Block layout page is key to learning how Drupal works. However, there is more to Blocks than just the Block layout page. You can also create different types of blocks with different fields in Drupal 8.

Site builders new to Drupal don’t usually stumble across the Block Types page on their own. In fact, I think a lot of site builders don’t know about block types at all. This is probably because "Block Types" is not listed in the 2nd level of the administration menu under “Structure”, but is instead buried in the third level of the menu.

Similarly, site builders might never find the “Custom block library” page for creating block content. Depending on how blocks are being used on a particular site, this page might be more logically nested under “Content”.

Many users never find the “Demonstrate block regions” link, a really key page for anyone learning how Drupal works and what regions are. Most Drupal site builders who see this page for the first time are delighted, so making this link more prominent might be an easy way to improve the experience for site builders.

Appearance

Typically, a Drupal site has two themes: the default/front-end theme and the admin/back-end theme. The appearance page doesn’t make this clear. Some site builders learning Drupal end up enabling an admin theme on the front-end or a front-end theme for the admin UI. I think the term "default theme" is confusing for new users. And making a consistent UI for setting a theme as the default theme or the admin theme would be a nice improvement.

Install vs. Download

The difference between installing and downloading a module is not laid out clearly. If someone is trying Drupal for the first time, they’ll likely use the UI to try and install modules, rather than do it through the command line. In the UI, they see the link to “Install New Module”. Once this is done, it seems like the module should be installed. Even though they have the links available to “Enable newly installed modules”, they might not read these options carefully. I think re-labelling the initial link to "Download New Module" might help here.

Most users are also confused about how to uninstall a module. They don’t know why they can’t uncheck a checkbox on the "Extend" page. Providing a more visible link to the uninstall page from installed modules might help with this.

Configuration Management

The UI for configuration management is pretty hidden in Drupal 8. In practice, configuration management is something we typically do via the command line; this is how most seasoned Drupalers would import/export configuration. However, someone learning how Drupal 8 works is going to be learning initially from the UI. And at the moment, site builders are virtually unaware of Configuration Management and how it affects their work.

Having some kind of simple reminder in the UI to show site builders the status of their configuration could go a long way to them understanding the configuration management workflow and that they should be using it.

The Admin Toolbar

Everyone loves the admin toolbar module. Once it’s installed, site builders are happy and ask “Why isn’t this part of Drupal core?”

But, for a certain set of people, it’s not clear that the top-level of this navigation is clickable. The top-level pages for “Configuration” and “Structure” are index pages that we don’t normally visit. But the “Content” page provides the content listing, and the “Extend” page shows us all our modules. These are obviously key pages. Imagine trying to learn Drupal if you don’t realize you can click on these pages for the first week. But users who are used to not being able to click top-level elements might simply miss these pages. Does anyone know a good way to signal that these are clickable?

What's Next?

I would love to hear how you think we should improve the admin UI for site builders and if you have any thoughts on my suggestions. 

One thing that I'm very excited about that's already happening is a new design to modernize the look and feel of the Admin UI in Drupal. This will go a long way to making Drupal seem more comfortable and easy to use for everyone, content editors and site builders alike. You can see the new designs here.


TEN7 Blog's Drupal Posts: Episode 049: Jeff Robbins

December 19, 2018 - 22:48
In this episode, Ivan is joined by his friend Jeff Robbins: entrepreneur, co-founder of Lullabot, founder of Yonder, an executive coach, an author, a signed recording artist and a self-proclaimed philosopher. Here's what we're discussing in this podcast: Discovering the meaning of a fortnight; Jeff's background; Boston's universities tour; Starting at O'Reilly Media; Gopher & open source software; Orbit, one of the first bands on the internet; Signing with A&M Records; Surviving record labels; Intellectual property, algorithms and paradigms; The complexity of band management; On the Lollapalooza tour with Snoop Doggy Dogg; The unintentional result of subliminal intentionality; 123 Astronaut; The transition from music to Drupal management; Lullabot, one of the first fully distributed companies; DrupalCon Vancouver 2006, let the hiring begin; Yonder, a traveler's guide to distribution; The future of corporate distribution; Writing a new book; One Minute Manager, Make Friends and Influence People and Tribal Leadership.

Drupal.org blog: What’s new on Drupal.org? - November 2018

December 19, 2018 - 00:44

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

DrupalCon

Give thanks with your Drupal Family at DrupalCon

What better way to connect with your Drupal family and give thanks for Drupal's impact than at DrupalCon? Still need to register? Coming on your own? Now's a great time to lock in a good price.

If your organization is sponsoring your trip, consider investing those end of year budgets in your registration... and invest in Drupal's success while you do it. Your team can also sign up for or renew your Supporting Partnership for steep discounts on ticket prices.

The schedule is available now; check out the specialty sessions and register before prices go up!

Kicking off planning for DrupalCon Amsterdam

Members of the Drupal Association team traveled to Europe to meet with Kuoni Congress and the DrupalCon Europe advisory committee for a kick-off meeting and a deep dive on the event planning for DrupalCon Amsterdam. This was our opportunity to dig into the event with the team, and it was a tremendously productive 2-day session.

More news about Amsterdam will be coming soon, so check back at https://events.drupal.org/amsterdam2019 soon!

(Image courtesy of Baddy Breidert)

Drupal.org Updates

New telemetry data about Drupal usage

In November we also re-architected the way we parse data from sites that call back to Drupal.org for updates. This allowed us to learn more about how Drupal is used in the wild. This graph shows the current distribution of PHP versions for Drupal 8 sites. Notably only about 20% of Drupal 8 sites are still using PHP 5, so the migration effort for the community may not be as big as some expected when PHP 5 reaches end of life.

Finding a new Technical Program Manager

As Tim has stepped into the role of interim executive director, we've been looking to bring a new team member onboard to backfill some of his technical responsibilities. In November we interviewed candidates for our Technical Program Manager position. We're excited to have our new team member join in the new year!

Drupal.org/community changes live!

The changes to the Drupal.org community home page that we outlined in our October update are now live. This new entry point to the Drupal community addresses the many different needs that a new member of our community might have, and the different personas that they might represent. The home of the community will continue to evolve over time, so expect to see more updates soon, and please offer your feedback here.

Drupal Association Updates

Executive search firm selected

As you know, we've begun the process of looking for our next Executive Director. In November we interviewed executive search firms to help us with this process, and in early December we announced that we've selected Lehman Associates to help us with our search. If you would like to read the candidate profile, or contact Lehman Associates to offer candidate suggestions or provide other feedback, please use the button below.

View the profile

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who make it possible for us to work on these projects. In particular, we want to thank:

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

Debug Academy: The new layout builder’s impact on Drupal’s evolving learning curve

December 19, 2018 - 00:44
Author: Ashraf Abed, Jack Garratt

Developers and organizations alike continue to use Drupal for large-scale projects due to its modular architecture, solid data model, community, security team, stability, and good-fit for many “ambitious” projects. However, historically, Drupal has caught considerable flak for its unintuitive development process - that might finally be changing.

I have been teaching Drupal development at Debug Academy ( https://debugacademy.com ) for over 4 years now. One recent development piqued my interest, especially in regards to teaching newcomers how to build sites using Drupal, and that is the new layout builder in Drupal core. The layout builder was released as an experimental Drupal core module in Drupal 8.6.

Drupal 8.6, and the layout builder (experimental at the time), were released on September 5, 2018, and our summer semester was scheduled to end on September 9th. Due to a combination of excitement and a desire to teach what I firmly believe to be the soon-to-be standard technique for building Drupal websites, we did not wait to use it. We built the entire website except for the internal landing pages before September 5th, the day the layout builder was released and 4 days before the conclusion of our semester. We then combined custom block types with the new layout builder to finish building the website.

And my, it was easy to understand and to teach. Since then, we’ve integrated the layout builder more deeply into our subsequent semester and the results bode well for Drupal’s future. The client loved the new layout builder functionality, and the students who were newest to programming felt more comfortable than ever when laying out our custom landing pages. But don’t take my word for it, let’s take a look at the data.

How pages are built in Drupal

Common use cases for page creation include:
  • Advanced custom pages

    • Developers can build individual pages (routes) using raw PHP to pass information from various sources (such as internal or external APIs) to the page.

  • Standardized pages as content

    • Users create content (“nodes” in Drupal nomenclature) with standard layouts for displaying the content’s data.

  • Pages with custom overrides

    • Developers can override pages’ default output and pass additional data to specific pages with PHP.

  • Semi-structured content

    • Users can embed standardized “blocks” of content in or around otherwise standardized pages.

  • Drag & Drop content placement

    • Content editors can drag & drop specific types of content through UI onto pages.

Drupal’s solutions for these use cases
  • Advanced custom pages

    • Custom route built in PHP utilizing Drupal’s API

    • HTML with Twig

  • Standardized pages as content

    • Create pages using the UI. Similar to creating a blog post.

    • Without writing any code, site builders can also create pages featuring different types of fields using Drupal’s content type system.

  • Pages with custom overrides

    • Create pages using the UI. Similar to creating a blog post.

    • Pass additional variables (such as a newsletter block) to the page with PHP

    • Render variables using Twig

    • Write any markup (HTML) directly in the Twig file

  • Semi-structured content:

    • Create “paragraph” types (sets of fields)

    • Embed “paragraph” content within a field on the content type

  • Drag & Drop content placement (the new layout builder!)

    • Create pages using the UI. Similar to creating a blog post.

    • Use drag & drop to place / remove layouts, sections, fields, blocks, and more!

    • Can be done per content type and (optionally) overridden per page

Comparison of the content creation options in Drupal

Drupal provides an abundance of solutions to problems content editors face. Different tools require different skillsets, so although Drupal has solutions for everything, let’s do a deeper comparison of the Drupal 8 content editing options to see what is available, what it can do, and how accessible it is to you and your team. The new layout builder is highlighted in the rightmost column:

 

|  | Advanced custom pages | Standardized pages as content | Pages with custom overrides | Semi-structured content | Drag & Drop content placement (new layout builder!) |
| --- | --- | --- | --- | --- | --- |
| How to: initial set up | Git, PHP, Twig, YML, Site building | Site building | Git, PHP, Twig, YML, Site building | Site building | Site building |
| How to: add'l pages w/same layout | Depends on implementation | Content editing | Content editing | Content editing | Content editing |
| How to: add'l pages w/custom field layouts | Git, PHP, Twig, YML, Site building | N/A | Git, PHP, Twig, YML, Site building | Content editing* | Content editing |
| How to: add'l pages w/custom layouts, including mixing & matching blocks, fields, & more | Git, PHP, Twig, YML, Site building | N/A | Git, PHP, Twig, YML, Site building | N/A | Content editing |
| How to: Component-based design support using Drupal blocks within content area | Custom code only | N/A | Custom code only | "Pre-grouped" fields can be designed as components, but placement is very restricted | Fully supported |
| Uses core/contrib | Core | Core | Core | Contrib (paragraphs) | Core (experimental - full release expected in 8.7!) |

* Pre-grouped sets of fields must all be displayed sequentially.

In practice: what do the numbers show?

Through our courses we have been able to collect data about how the switch from other approaches (namely PHP / Twig & paragraphs) to the new layout builder has impacted the learning curve, productivity, and ongoing maintenance of projects. Debug Academy’s Jack Garratt ran the numbers for us. For the sake of our analysis, a “task” refers to the smallest unit of work assigned to a developer (Debug Academy student, in this case):

The data: before the layout builder (“Tasks to homepage”)

Let’s take a look at the data, following the number and type of tasks from the start of the project until the completion of the front page and the setup of internal pages. In the first data set, we’ll look at our projects before use of the layout builder.

We assigned our students to implement PHP preprocess hooks that create Twig variables for Views blocks and blocks provided by contrib modules (e.g. a newsletter block), as well as to extract other non-block data from the website. Note: there are alternatives in Drupal’s contrib space to using PHP, such as the contributed module Twig Tweak; we do not encourage using Twig Tweak in our projects. The project builds were a success; however, this approach is not the most readily grasped by non-web developers and clients.

The breakdown is below:

| Task Type | Number of tasks (pre-layout builder) |
| --- | --- |
| Up to & including homepage (without layout builder) |  |
| Twig | 17 (task per section per page) |
| Site Building | 14 |
| Configuration in Code (site installation and setup) | 7 |
| CSS/SASS | 6 |
| PHP/Custom Module | 10 |
| Initial Theme setup | 3 |
| Set up reusable inner page layouts (4 content types) |  |
| Twig | 8 |
| PHP | 4 |
| CSS/SASS | 4 |

PHP and Twig tasks constitute a significant share of the total work completed by our students. Out of the 17 Twig tasks, we deemed that 10 required using PHP to pass various blocks to the homepage for a variety of use cases, such as embedding Twitter or a Donately form within the page’s content.

On subsequent projects, our students are constructing the pages in the Layout Builder.

The data: using Drupal’s new layout builder (“Tasks to homepage”)

The Layout Builder UI has access to the aforementioned block types.  If there is a specific need that cannot be readily replicated with custom block types with the requisite fields, developers can still use Twig to add that functionality.  However, we found that creating and styling custom block types in Drupal 8 core as reusable components (see component based design) greatly streamlined our process and eliminated the need for many higher complexity tasks. 

What we soon realized was that PHP and Twig were needed far less for individual page construction, but were still rather helpful in crafting custom blocks (components).

| Task Type | Number of tasks (with the new layout builder) |
| --- | --- |
| Up to & including homepage (with the new layout builder) |  |
| Twig | 5 (task per component type) |
| Site Building | 18 |
| Configuration in Code (site installation and setup) | 7 |
| CSS/SASS | 6 |
| PHP/Custom Module | 0 |
| Initial Theme setup | 3 |
| Set up reusable inner page layouts (4 content types) |  |
| Twig | 1 (we’re reusing component types) |
| PHP | 1 |
| CSS/SASS | 1 |

One consequence of de-emphasizing Twig for creating basic pages with custom HTML, in favor of the layout builder, is that it allows a more beneficial division of labor. Teams may create more tasks that involve basic page building to get their newer developers familiar with the Drupal UI. Moreover, companies can train their clients to create basic pages with customized blocks (a video block, a stylized quote block, or a button block, to name a few), which minimizes the instructional curve. Essentially, the tasks to create highly customized pages no longer require coding. Using the layout builder for basic page creation in this manner also frees up more experienced developers to focus on other tasks and reduces the oversight newer developers need.

The data: full comparison, beyond the homepage

The above tables compare “time to homepage” and “time to generic inner page”. Where you really start to see the benefits is when you’re creating a website with many pages. Let’s look at the combined data, calling special attention to the newly added last rows, “Creating additional pages with custom layouts”:

| Task Type | Pre-layout Builder | With Layout Builder |
| --- | --- | --- |
| Up to & including homepage |  |  |
| Twig | 17 (task per section per page) | 5 (task per component type) |
| Site Building | 14 | 18 |
| Configuration in Code (site installation and setup) | 7 | 7 |
| CSS/SASS | 6 | 6 |
| PHP/Custom Module | 3 | 0 |
| Initial Theme setup | 3 | 3 |
| Set up reusable inner page layouts (4 content types) |  |  |
| Twig | 8 | 1 (reusing component types) |
| PHP | 4 | 1 |
| CSS/SASS | 4 | 1 |
| Creating additional pages with custom layouts |  |  |
| Twig | 37 | 2 |
| PHP | 16 | 4 |
| Content Editing | 47 | 69 |
| Site building | 45 | 17 |

Thanks to a combination of component-based design and the new layout builder, content editors can create truly custom layouts without writing a line of code. They can, for example, drop a video embed in between the body field and the author field, move the author field to the footer, and arrange other fields in three columns.

Summary & Conclusion:

This specific project built with the layout builder required:

  • 88% fewer PHP tasks

  • 64% fewer site building tasks

  • 87% fewer Twig tasks (thanks to component-based design)

  • 46% more content editor tasks

The numbers are clear. Drupal’s layout builder has the potential to bring down the cost of ownership of Drupal websites significantly by enabling content editors and less senior developers to build more of the website. As a result, the new layout builder will make Drupal more accessible to newer developers and to smaller organizations with ambitious goals for their web presence.

It’s time to get up to speed in Drupal 8!

Debug Academy's real projects are the source of the above data, and we encourage you to take a look at our programs. There has never been a better time to add Drupal to your skillset.

We will continue to incorporate cutting edge tech in our courses, including the new layout builder, Composer, Drush, Drupal Console, Object Oriented Programming with PHP7, and much more, all applied to a real, unique, team project! At the end of it all we help students continue to the next phase of their careers! Visit https://debugacademy.com to sign up for one of our free info sessions - our next semester begins January 27th!

P.S. thanks to Dries Buytaert for providing feedback on this post!

Drupal.org blog: What’s new on Drupal.org? - October 2018

December 19, 2018 - 00:29

Read our Roadmap to understand how this work falls into priorities set by the Drupal Association with direction and collaboration from the Board and community.

Lock in your DrupalCon tickets before the end of the year

DrupalCon Seattle is shaping up to be an outstanding conference. If your organization is sponsoring your trip, now's a great time to use your 2018 budget to register to attend. Your team can sign up for or renew your Supporting Partnership for steep discounts on ticket prices. Coming to DrupalCon on your own? The schedule is available now, so peruse the offerings and register before prices go up!

Drupal.org Updates

A new taxonomy for DrupalCon sessions

As you've seen if you clicked the link to the schedule above, events.drupal.org was updated to support session submission by tag, rather than track, earlier this year. This provides more flexibility in finding the content you're interested in, and encourages sessions which cross the boundaries of traditional tracks.

Prototyping a new Try Drupal experience

In October we put together a visual prototype of our proposed revamp of the Try Drupal program. This includes a better, more targeted user experience for each persona, as well as the opportunity for more organizations to participate. More details will be shared soon as we get further along, but for a sneak preview you can review the operational update from our recent public board meeting.

Improving the experience of using Composer

In October significant progress was made on the initiative to Improve Drupal Core's use of Composer. In particular, kicking off the primary issue for building this better support into Core, as well as moving the issue for supporting Semantic Versioning for Contrib from a plan to the implementation phase. These changes will improve the user experience for Drupal users with composer based workflows, and especially for Drupal users who start sites without Composer, and then switch to Composer based workflows. This also lays the groundwork for necessary steps for supporting the Drupal 9 roadmap.

Promote Drupal

Releasing the first draft of the Drupal Brand Book

In October, with the feedback of the Promote Drupal volunteer team, we developed and released the initial draft of the Drupal Brand book. This is one of the materials created by the Promote Drupal initiative, in order to unify the brand presentation for Drupal across agencies, internal sales, and regions. This will be updated with a vision statement for Drupal's business strategy and market position.

A new Community Section

In October we also spent time creating a beta experience for a new Drupal.org/community landing page. This page focuses on the onboarding process, helping visitors identify their need and persona, so they can get to the segment of the community that is relevant to them. (Hint: this beta experience has since gone live!) If you have feedback about making the community portal better, you can leave your suggestions in the drupal_org_community issue queue.

———

As always, we’d like to say thanks to all the volunteers who work with us, and to the Drupal Association Supporters, who make it possible for us to work on these projects. In particular, we want to thank:

If you would like to support our work as an individual or an organization, consider becoming a member of the Drupal Association.

Follow us on Twitter for regular updates: @drupal_org, @drupal_infra

Sooper Drupal Themes: Create a Drupal Pricing Page with Glazed Builder

December 18, 2018 - 21:44

The pricing page is one of the key pages on a website, so it is important to have a clear and professional design that communicates the product benefits and pricing tiers your business offers. In today’s article we are going to learn how to recreate the new Sooperthemes pricing page. The Sooperthemes pricing page has a clean design style that mainly consists of rows, columns and text. Throughout this article we are going to work with the following elements:

  • Rows
  • Columns
  • Text
  • Buttons

OpenSense Labs: Content driven Commerce with Drupal 8: Another Feather in the Hat

December 18, 2018 - 20:53
“Sometimes I would buy Vogue instead of dinner. I felt it fed me more."  — Carrie Bradshaw from Sex and the City

Consumer instincts have changed with time and so have market tactics. Today, global brands are not only selling a product; they are also building a journey with the shopper, an impression that stays longer than the product (no pun intended).

When shopping online, it’s about knowing every little detail, almost like visiting the market and buying the product in person. Shopping is no longer just about checking out the product and adding it to the cart.

It is here that commerce meets content.

And that is why everything has a story. This author, the perfume she wears, the website, images, rock, paper, scissors… everything.

Can Drupal provide the commerce organization the storyboard that they are looking for? And what about the conversational UI that is booming in the commerce industry? Can Drupal stand up to the expectations of its customers?

 

The Content-Driven Commerce. What is the Concept About?

Despite the fact that we have been experiencing content and commerce together since the start of marketing, content-commerce has never, until now, existed as a concept in itself. 

The best example of content-driven commerce is the print magazine, and that is what retailers and business corporations have been trying to imitate online today.

An advertisement does that. Why aren’t we focussing there?

In 2013, Lab24 - an American market research firm - carried out a study that revealed that people had some serious trust issues with advertising. 

  • 76% of people believe that ads are “very exaggerated” or “somewhat exaggerated”.
  • 87% think half or more cleaning ads are photoshopped.
  • 96% think half or more weight loss ads are photoshopped.

Building a personal rapport is more important than just throwing content at people in the form of ads. Meaningful content filters out such garbage advertising while also bringing rewarding results!

What has changed in the market is that businesses can no longer ignore customers’ desire for content and purpose.

And yet, more often than not, online shops still resemble soulless product catalogs.

The story of Coco Chanel and her perfume No. 5 is beautifully presented in a series of five.


With the help of blogs, user-generated content, and rich multimedia, brands are not only able to stand out from the crowd, but also provide a curated commerce experience to their customers. 

However, product-centric their content may be, it still establishes an emotional connection with customers through inspirational stories that pave the road to successful commerce.

Understanding the Concept with Timex

The famous American watchmaker Timex and its 22 sites are built on Drupal. The iconic brand offers intuitive navigation and an engaging mix of product, social, and editorial content. The website infuses content into the entire shopping experience, doing an amazing job of featuring useful imagery and text that customers can relate to.


Timex needed to ensure a unified brand experience on all its sites while also delivering digital content relevant to local markets - allowing the addition of new product content when and where needed.

Drupal ensures that the content generated in the U.S. is localised for each location and published to local markets according to their needs.

Not only does it enable the team to rapidly deploy content across regional websites while remaining on-brand, it also helps fuel the company’s international growth.

Why Opt For Drupal When Building Content and Commerce?

Helpful content, and not discounts, should be the centerpiece of awareness. And that’s exactly the role that content is meant to play in commerce.

The Lab24 statistics make it clear that while all e-commerce platforms aim to serve their users a better experience, that is not possible without leveraging the power of storytelling.

As the admin of an online store, you need to select and add the various content types you are looking for. Be it blogs, testimonials, customer reviews, or product descriptions, Drupal has it all.

Drupal is unique in its ability to easily integrate into ambitious commerce architectures in precisely the manner the brand prefers. Drupal can be integrated with other e-commerce platforms, giving rise to a hybrid solution. The third-party platform can interact with users either "through the glass", as in the case of a headless commerce solution, or work side by side with Drupal.

In either case, Drupal can cover the need for content-driven user experiences (the homepage, marketing-driven landing pages, blog content), while commerce features such as product detail pages, category landing pages, and the cart and checkout flow are handled by the e-commerce platform.

Whatever the case may be, content types are at the core of Drupal. 

  1. Easy Content Authoring: Intuitive tools for content creation, workflow, and publishing make life easy for content creators. User permissions and authentication help manage editorial workflows efficiently. Previews let editors check how content will look on any device before they approve and publish.
     
  2. Mobile Editing: Team members can review, edit and approve content from mobile devices, to keep content and campaigns flowing, regardless of where they are and what device they’re on.
     
  3. In-place Authoring: Drupal’s WYSIWYG editor lets you create and edit content in place. 
     
  4. Content Revisioning and Workflows: For a distributed team Drupal enables a quick and easy way to track changes, revisions, and stage. It tells you who did what, when, out of the box. Also, it lets you manage custom, editorial workflows for all your content processes. Content staging allows you to track the status of the content - from creation to review to publication - while managing user roles and actions, automatically. 
     
  5. Content Tagging and Taxonomy: Beyond creating content, Drupal’s strength lies in creating structured content. This comes when you define content elements, tag content based on their attributes, create relevant taxonomy so it can be searched, found, used, and reused in ways that satisfy the visitors.
     
  6. Modules for Multimedia Content: Entity Browser, Paragraphs, Pathauto, Admin Toolbar, Linkit, Blog, Metatag, and other content editing modules give Drupal an extra lease of life by extending and customizing content features and capabilities. They allow you to choose which features you want for your site. 

With multimedia content, your commerce site better serves the need for an integrated, unified and hiccup-free user experience. In addition, you can push content from your website out to other channels.

As marketing horizons expand to social media, it is important to deliver highly relevant, personal content via video (YouTube), stores, TV and more. Brands can no longer afford to deliver disconnected, uncoordinated experiences across a variety of different channels.

 

But Trust the Case Studies
  • Benefit


The new content-centric website is an integrated, robust online store managed with SAP Hybris and Drupal. As the marketing department’s needs became more sophisticated, the content management system offered by Hybris was no longer able to adequately manage the store’s front end and content experience. 

Benefit Cosmetics is known for their colorful personality, irreverent voice, and unique dilemma-based shopping experience. The content mirrors that personality while providing a seamless shopping experience. Benefit’s marketing and content teams are now able to maintain the brand’s unique design aesthetic while customizing content for users’ needs.

The new platform brought commerce to over thirty countries, and to keep the translations workflow sane it works with Translations.com.
 

  • Strand of Silk


The Strand of Silk website required a smart blend of commerce and content, such that editor- and user-generated content could be easily linked to products on the website.

Various commerce-only solutions were evaluated, but Drupal was selected for its ability to easily combine e-commerce and content - a combination seen as the de facto requirement for e-commerce sites in the near future.

The Rise of Content, Commerce, and Conversation

Content and commerce have been coming together for a long time, but conversational commerce is catching up really fast. For consumers, a conversational experience - an informal exchange of ideas, as in spoken conversation - is a way to learn about products and services.

Shoppers are looking for interactions that are as easy, casual and convenient as a conversation. As conversational commerce catches on, it will become the guiding experience moving forward.

The idea of a conversational UI shouldn’t be limited to a chatbot. The old trick still works: content still rules. Although new techniques and technologies can change the way we do things, we can’t abandon the existing channels.

Messaging platforms have become so universal and commonplace that conversational experiences are also relatively easy to build on top of them.


The model can work in many ways, both one-to-one and one-to-many. Sending a message to a customer who has applauded your service on Facebook falls under the one-to-one approach.

But if a new store sends a custom message to a targeted audience segment living in the area, that comes under the one-to-many approach.

Add to the scene the boom in voice assistants: Amazon Alexa and Google Home actually do assist consumers in finding products, stores, events and much more.

The Drupal community has been focusing on bot frameworks and other cognitive services that can be used to develop bots for different use cases. It all started with a framework called the open source Bot Builder SDK for Node.js, which is used for building bots.

Further, several bot frameworks such as Facebook Messenger (wit.ai), Google Dialogflow, IBM Watson and the Microsoft Bot Framework, as well as open source conversational AI like Rasa, have been considered for the integration.

The main idea is that bots can enable searching and exploring products by incorporating Drupal Commerce APIs. Through message-based interaction, bots can also offer simple Add to Cart and Review Cart functionality, among others, and suggest relevant actions while a shopper is looking for a product.
 

Whatever perspective you take, integrating content into commerce is easier said than done. The product has to be worthy, the content authentic and the transaction free of breaches. By providing a seamless experience to both retailers and publishers, Drupal is the bridge you need.

Connect with us to build a seamless, content-commerce experience. Drop a mail at hello@opensenselabs.com.


ComputerMinds.co.uk: Custom AJAX loading icon

December 18, 2018 - 17:48

There's nothing like Drupal's stock AJAX spinner (the default blue loading 'throbber' graphic) to make you notice that a site's design hasn't been fully customised. The code from my previous article showing how to fetch a link over AJAX to open in a Foundation reveal popup would suffer from this without some further customisation. After clicking the 'Enquire' button, a loading icon of some kind is needed whilst the linked content is fetched. By default, Drupal just sticks that blue 'throbber' next to the link, but that looks totally out of place. Our client's site uses a loading graphic that feels much more appropriate in style and placement, but my point is that you can set up your own bespoke version. Since it's Christmas, let's add some festive fun! Here's a quick video showing what I'll take you through making:

A few things are needed:

  1. Create a javascript method that will add a custom progress indicator
  2. Ensure the javascript file containing the method is included on the page
  3. Set a custom attribute on the link that will trigger the AJAX
  4. Override Drupal core's javascript method that adds the standard progress throbber, to respect that custom attribute

There are many ways to achieve points 1 and 2. Usually, you would define a library and add it with #attached. But I decided I wanted to treat my work as if it were part of Drupal's core AJAX library itself, rather than something to add separately. So I implemented hook_library_info_alter() in my theme's main .theme file:


/**
 * Implements hook_library_info_alter().
 */
function MYTHEME_library_info_alter(&$libraries, $extension) {
  // Add our own extension to drupal.ajax, which is aware of the page markup so
  // can add AJAX progress loaders in the page.
  if ($extension == 'core' && isset($libraries['drupal.ajax'])) {
    $libraries['drupal.ajax']['js']['/' . drupal_get_path('theme', 'MYTHEME') . '/js/ajax-overrides.js'] = [];
  }
}
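For readers who prefer the conventional route mentioned above, here is a rough sketch of what defining a library and attaching it with #attached could look like. The library name, preprocess hook and file path below are hypothetical examples, not part of the original setup.


/**
 * Implements hook_preprocess_HOOK() for page templates.
 *
 * Assumes a hypothetical library declared in MYTHEME.libraries.yml, e.g.:
 *
 *   ajax-overrides:
 *     js:
 *       js/ajax-overrides.js: {}
 *     dependencies:
 *       - core/drupal.ajax
 */
function MYTHEME_preprocess_page(array &$variables) {
  // Attach the custom JS wherever the AJAX links may appear.
  $variables['#attached']['library'][] = 'MYTHEME/ajax-overrides';
}
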

My ajax-overrides.js file contains this:


(function ($, window, Drupal, drupalSettings) {
  /**
   * Creates a new Snowman progress indicator, which really is full screen.
   */
  Drupal.Ajax.prototype.setProgressIndicatorSnowman = function () {
    // Build the full-screen snowman element; the theme styles these classes.
    this.progress.element = $('<div class="ajax-progress ajax-progress-snowman ajax-progress-fullscreen"></div>');
    // My theme has a wrapping element that will match #main.
    $('#main').append(this.progress.element);
  };
})(jQuery, window, Drupal, drupalSettings); 

My theme happens to then style .ajax-progress-snowman appropriately, to show a lovely snowman in the middle of the page, rather than a tiny blue spinner next to the link that triggered the AJAX. Given that the styling of the default spinner happens to make links & lines jump around, I've got the ajax-progress-fullscreen class in there, to be more like the 'full screen' graphic that the Views UI uses, and avoid the need to add too much more styling myself.

Part 3, adding a custom attribute to specify that our AJAX link should use a Snowman animation, is easily achieved. I've already added the 'data-dialog-type' attribute to my link, so now I just add a 'data-progress-type' attribute, with a value of 'snowman'. I want this to work similarly to the $element['#ajax']['progress']['type'] property that can be set on form elements that use AJAX. Since that only gets applied to form elements, not arbitrary links using the 'use-ajax' class, we have to do the work to pick this up ourselves.
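
For illustration, such a link could be built as a render array along these lines. This is a minimal sketch: the route name, link text and dialog type value are hypothetical; the parts that matter here are the 'use-ajax' class and the 'data-progress-type' attribute.


use Drupal\Core\Url;

// Hypothetical 'Enquire' link that loads its target over AJAX and asks for
// the snowman progress indicator instead of the default throbber.
$build['enquire'] = [
  '#type' => 'link',
  '#title' => t('Enquire'),
  '#url' => Url::fromRoute('mymodule.enquire'),
  '#attributes' => [
    'class' => ['use-ajax'],
    // Placeholder value; the previous article sets up its own dialog type.
    'data-dialog-type' => 'dialog',
    'data-progress-type' => 'snowman',
  ],
  // The core AJAX library must be present for 'use-ajax' links to work.
  '#attached' => ['library' => ['core/drupal.ajax']],
];
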

So this is the last part. Back in my ajax-overrides.js file, I've added this snippet to override the standard 'throbber' progress type that AJAX links would otherwise always use. It falls back to Drupal's original method when the progress type isn't specified in a 'data-progress-type' attribute.


  // Override the progress throbber, to actually use a different progress style
  // if the element had something specified.
  var originalThrobber = Drupal.Ajax.prototype.setProgressIndicatorThrobber;
  Drupal.Ajax.prototype.setProgressIndicatorThrobber = function () {
    var $target = $(this.element);
    var progress = $target.data('progressType') || 'throbber';
    if (progress === 'throbber') {
      originalThrobber.call(this);
    }
    else {
      var progressIndicatorMethod = 'setProgressIndicator' + progress.slice(0, 1).toUpperCase() + progress.slice(1).toLowerCase();
      if (progressIndicatorMethod in this && typeof this[progressIndicatorMethod] === 'function') {
        this[progressIndicatorMethod].call(this);
      }
    }
  };

So there you have it - not only can you launch beautiful Foundation Reveal popups from links that fetch content via AJAX, you can now avoid Drupal's little blue throbber animation. If it's an excuse to spread some cheer at Christmas, I'll take it.

Happy Christmas everyone!

DrupalCon News: Community Connection - Mario Hernandez

December 18, 2018 - 15:11

We’re featuring some of the people in the Drupalverse! This Q&A series highlights some of the individuals you could meet at DrupalCon.

Every year, DrupalCon is the largest gathering of people who belong to this community. To celebrate and take note of what DrupalCon means to them, we’re featuring an array of perspectives and some fun facts to help you get to know your community.

Dries Buytaert: Relentlessly eliminating barriers to growth

December 18, 2018 - 10:11

In my last blog post, I shared that when Acquia was a small startup, we were simultaneously focused on finding product-market fit and eliminating barriers to future growth.

Today, Acquia is no longer a startup, but eliminating barriers to growth remains very important after you have outgrown the startup phase. In that light, I loved reading Eugene Wei's blog post called Invisible Asymptotes. Wei was a product leader at Amazon. In his blog post he explains how Amazon looks far into the future, identifies blockers for long-term growth, and turns eliminating these stagnation points into multi-decade efforts.

For example, Amazon considered shipping costs to be a growth blocker, or as Wei describes it, an invisible asymptote for growth. People hate paying for shipping costs, so Amazon decided to get rid of them. At first, solving this looked prohibitively expensive. How can you offer free shipping to millions of customers? Solving for this limitation became a multi-year effort. First, Amazon tried to appease customers' distaste for shipping fees with "Super Saver Shipping". Amazon introduced Super Saver Shipping in January 2002 for orders over $99. If you placed an order of $99 or more, you received free shipping. In the span of a few months, that number dropped to $49 and then to $25. Eventually this strategy led to Amazon Prime, making all shipping "free". While a program like Amazon Prime doesn't actually make shipping free, it feels free to the customer, which effectively eliminates the barrier for growth. The impact on Amazon's growth was tremendous. Today, Amazon Prime provides Amazon an economic moat, or a sustainable competitive advantage – it isn't easy for other retailers to compete from a sheer economic and logistical standpoint.

Another obstacle for Amazon's growth was shipping times. People don't like having to wait for days to receive their Amazon purchase. Several years ago, I was talking to Werner Vogels, Amazon's global CTO, and asked him where most commerce investments were going. He responded that reducing shipping times was more strategic than making improvements to the commerce backend or website. As Wei points out in his blog, Amazon has been working on reducing shipping times for over a decade. First by building a higher density network of distribution centers, and more recently through delivery from local Whole Foods stores, self-service lockers at Whole Foods, predictive or anticipatory shipping, drone delivery, and more. Slowly, but certainly, Amazon is building out its own end-to-end delivery network with one primary objective: reducing shipping times.

Every organization has limitations that stunt long-term growth, so there are a few important lessons that can be learned from how Amazon approached its invisible asymptotes:

  1. Identify your invisible asymptotes or long-term blockers for growth.
  2. Removing these long-term blockers for growth may look impossible at first.
  3. Removing these long-term blockers requires creativity, patience, persistence and aggressive capital allocation. It can take many initiatives and many years to eliminate them.
  4. Overcoming these obstacles can be a powerful strategy that can unlock unbelievable growth.

I spend a lot of time and effort working on eliminating Drupal's and Acquia's growth barriers, so I love these kinds of lessons. In a future blog post, I'll share my thoughts about Drupal's growth blockers. In the meantime, I'd love to hear what you think is holding Drupal or Acquia back — be it via social media, email or preferably your own blog.

Promet Source: How to Stop SPAM with Drupal 8's Recaptcha module

December 18, 2018 - 08:43
Have you ever tried logging in or registering on a website and been asked to identify some distorted numbers and letters and type them into the provided box? That is the CAPTCHA system. CAPTCHA helps verify whether your site's visitor is an actual human being or a robot - not a robot like you see in the Terminator movies, but automated software that generates undesired electronic messages (or content). In short, CAPTCHA protects you from SPAM.

Flocon de toile | Freelance Drupal: Small sites, large sites, micro sites with Drupal 8

December 18, 2018 - 08:04

Drupal 8 is a tool designed to meet the needs of the most ambitious web projects. We hear a lot about headless, API-first, decoupled architectures and the like, which certainly enable solid foundations for ambitious projects. But this does not mean that Drupal 8 no longer powers more traditional, and sometimes much less ambitious, sites: simple, small or even large websites for which we still want to benefit from the modularity, flexibility and robustness of Drupal.

Gábor Hojtsy: How to automate testing whether your Drupal 8 module is incompatible with Drupal 9?

December 18, 2018 - 01:23

Drupal 9 is planned to be only 18 months away now, wow! It is already being built in Drupal 8 by marking APIs to be removed in Drupal 9 as deprecated and eventually upgrading some dependency version requirements where needed. Once the Drupal 9 git branch is open, you will be able to test directly against Drupal 9. That should not stop you from assessing the compatibility of your module with Drupal 9 now. To prepare for compatibility with Drupal 9, you need to keep up with deprecated functionality and watch out for upgraded dependencies (once we know exactly which those are). Of these two, automation can go a long way in helping you keep up with deprecated APIs.

DrupalCon News: DrupalCon Seattle: Sessions and Strides

December 18, 2018 - 01:10

DrupalCon Seattle is looking different than the DrupalCons of years past.

The overarching goal when planning DrupalCon Seattle 2019 was to expand both outreach and accessibility so that attendees would be representative of the community as a whole. The value of the conference is in the perspectives, energy and diversity of experiences participants share.

DrupalCon began setting goals to overtly increase diversity starting with DrupalCon Baltimore 2017. This continued in the planning of DrupalCon Nashville 2018, and is prioritized for DrupalCon Seattle 2019.