Wednesday, May 31, 2017

Spurning the Paris Climate Agreement Is the Right Call

Politico is reporting this morning that President Donald Trump will withdraw from the Paris Agreement. Mainstream political commentators have reacted with their customary fearmongering, but Trump’s repudiation of this deal should be welcome news to anyone genuinely concerned with American political processes, American political independence, and the fortunes of the world’s poorest people.

The first test that the Paris Agreement failed was procedural. Upon its adoption in 2015, Barack Obama and then-Secretary of State John Kerry refrained from designating the international pact as a treaty and instead deemed it an executive agreement. This designation prevented the Paris Agreement from requiring Senate ratification. The executive agreement designation is not uncommon; in fact, its use has far surpassed that of the treaty since World War II. The designation is suitable for the many instances of diplomatic minutiae that need not concern the American public, but considering the Paris Agreement’s potential economic ramifications, Obama and Kerry should have sought the advice and consent of the Senate. Fearing a Senate rejection, however, they opted for the simpler option. As a result, Donald Trump now has the power to rescind our commitment with one stroke of a pen.

That is precisely what he should do.

The Paris Agreement not only fails on procedural grounds, it also fails on substance. Though the agreement does not specify the regulations any particular country must adopt, it nevertheless sacrifices America’s political independence and jeopardizes our energy future.

References to “climate justice” in the preamble and “climate finance” in Article 9 elucidate that temperature-rise attenuation is not the agreement’s lone impetus. Concepts such as “climate justice” and “climate finance” reveal a deeply retributive impulse that motivates the United Nations Framework Convention on Climate Change—the body responsible for the pact. In this intellectual framework, countries that have led the world’s ascent out of pre-industrial squalor are now held in contempt, as global economic gains have been accompanied by an escalation in our atmosphere’s carbon dioxide concentration. This agreement undermines American independence by implying a moral responsibility to directly finance the developing world’s game of catch-up. Furthermore, it suggests that the United States and its peers have reached a level of economic development that should leave them satisfied. This idea is anathema not only to the experiences of millions of American families already struggling to pay the bills without climate measures driving up energy prices, but also to our founding principles of life, liberty, and the pursuit of happiness. Americans have the right to pursue the energy options that best suit them regardless of any international convention.

Article 7 of the document contains another inversion. It advocates for the “urgent and immediate needs of those developing country Parties that are particularly vulnerable to the adverse effects of climate change.” Developing countries—and the world’s poorest people more generally—do indeed need advocates. They face the greatest threat from weather events as they lack the resources to prepare and recover. It is not a warming planet, however, that presents the greatest threat to their well-being: it is an absence of the energy-dependent infrastructure that keeps us comparatively safe in the developed world. Whether they live in southeast Asia or southeast Louisiana, the world’s poor do not need global climate agreements to improve their lives and their ability to withstand weather events—they need affordable energy.

Some Paris proponents, such as the executives from ExxonMobil[1], argue that by remaining a party to the deal we retain a proverbial seat at the table to influence global decision-making. This approach, however, ignores that committing to the agreement offers an explicit endorsement of the problematic elements catalogued above and that as the world’s largest, most innovative economy the United States retains global influence regardless of this agreement. By resolutely standing opposed to the agreement, the United States will indicate that we prioritize the promise of global economic development, food security, and poverty eradication that comes with economic freedom and energy development over the promise of marginal temperature-rise attenuation. Furthermore, the agreement itself has a provision that would preserve our interest in influencing future developments while withholding our complete support. Article 16 states: “Parties to the Convention [the UNFCCC, of which we are a member] that are not Parties to this Agreement may participate as observers in the proceedings of any session of the Conference of the Parties serving as the meeting of the Parties to this Agreement.” Additionally, if we stay in the agreement our 46th president, whoever that may be, could use the agreement’s calls for ever-intensifying national plans to justify something along the lines of Barack Obama’s “Clean Power Plan.”

Proponents of the Paris Agreement cower before the risks associated with rising global temperatures, but it is the economic and political risk of remaining a party to the agreement that should truly concern us—especially when we consider that market forces already reduced American greenhouse gas emissions by nearly 10 percent from 2005 to 2014 according to the EPA’s Inventory of U.S. Greenhouse Gas Emissions and Sinks.

President Trump has an opportunity to become a champion of energy development and economic progress for both the American people and the world’s poorest populations. Regardless of economic circumstances—and especially when facing adverse weather conditions—affordable, reliable energy is a guarantor of human flourishing. The Paris Agreement surrenders our political independence and stifles global energy development that benefits people across the planet. Rumors that Trump will reject this deal and rescind the commitment of the previous administration signal brighter days ahead.


[1] On Sunday, the Financial Times reported that ExxonMobil chief executive Darren Woods recently penned a personal letter to President Trump urging him to keep the United States within the Paris Agreement. Though some observers have reacted with surprise, this is not a new position for the world’s largest publicly traded oil and gas company. In fact, in a March 22 letter to the White House, another ExxonMobil executive, Peter Trelenberg, lobbied for continued American commitment to the agreement as well. Trelenberg’s letter—and presumably Woods’—portrays the agreement as a prudent, fair framework for addressing the risks associated with a warming planet.


Optimizing AngularJS Single-Page Applications for Googlebot Crawlers

Posted by jrridley

It’s almost certain that you’ve encountered AngularJS on the web somewhere, even if you weren’t aware of it at the time. Here’s a list of just a few sites using Angular:

  • Upwork.com
  • Freelancer.com
  • Udemy.com
  • Youtube.com

Any of those look familiar? If so, it’s because AngularJS is taking over the Internet. There’s a good reason for that: Angular, React, and similar JavaScript frameworks make for a better user and developer experience on a site. For background, AngularJS and ReactJS are part of a web development approach called single-page applications, or SPAs. A traditional website loads each individual page as the user navigates the site, with calls to the server and cache, resource loading, and page rendering on every request; an SPA cuts out much of that back-end activity by loading the application once, when the user first lands on the site. Instead of loading a new page each time you click on a link, the site dynamically updates a single HTML page as the user interacts with it.

[Image c/o Microsoft]

Why is this movement taking over the Internet? With SPAs, users are treated to a screaming fast site through which they can navigate almost instantaneously, while developers have a template that allows them to customize, test, and optimize pages seamlessly and efficiently. AngularJS and ReactJS use advanced JavaScript templates to render the site, which means the HTML/CSS page speed overhead is almost nothing. All site activity runs behind the scenes, out of view of the user.
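To make the single-page model concrete, here is a minimal AngularJS routing sketch. The module name, routes, and template paths are hypothetical; it assumes the ngRoute module is loaded alongside AngularJS. One HTML shell is fetched once, and views are swapped in client-side rather than each URL triggering a full page load from the server.

```javascript
// Minimal sketch of an AngularJS 1.x SPA: index.html is fetched once, and
// ngRoute swaps templates into <div ng-view></div> as the user navigates.
// The module name, routes, and template files are made-up examples.
angular.module('exampleApp', ['ngRoute'])
  .config(['$routeProvider', function ($routeProvider) {
    $routeProvider
      .when('/', { templateUrl: 'views/home.html' })
      .when('/products/:id', {
        templateUrl: 'views/product.html',
        controller: 'ProductCtrl'
      })
      .otherwise({ redirectTo: '/' });
  }])
  .controller('ProductCtrl', ['$scope', '$routeParams', function ($scope, $routeParams) {
    // The view is rendered in the browser from a template plus data;
    // no new HTML document is requested from the server.
    $scope.productId = $routeParams.id;
  }]);
```

Note that the initial HTML document in a setup like this is little more than an empty shell; the visible content only exists after the JavaScript runs, which is exactly what trips up crawlers, as described below.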

Unfortunately, anyone who’s tried performing SEO on an Angular or React site knows that the site activity is hidden from more than just site visitors: it’s also hidden from web crawlers. Crawlers like Googlebot rely heavily on HTML/CSS data to render and interpret the content on a site. When that HTML content is hidden behind website scripts, crawlers have no website content to index and serve in search results.

Of course, Google claims they can crawl JavaScript (and SEOs have tested and supported this claim), but even if that is true, Googlebot still struggles to crawl sites built on a SPA framework. One of the first issues we encountered when a client first approached us with an Angular site was that nothing beyond the homepage was appearing in the SERPs. Screaming Frog crawls uncovered the homepage and a handful of other JavaScript resources, and that was it.

[Screenshot: Screaming Frog JavaScript crawl results]

Another common issue is recording Google Analytics data. Think about it: Analytics data is tracked by recording pageviews every time a user navigates to a page. How can you track site analytics when there’s no HTML response to trigger a pageview?

After working with several clients on their SPA websites, we’ve developed a process for performing SEO on those sites. By using this process, we’ve not only enabled SPA sites to be indexed by search engines, but even to rank on the first page for keywords.

5-step solution to SEO for AngularJS

  1. Make a list of all pages on the site
  2. Install Prerender
  3. “Fetch as Google”
  4. Configure Analytics
  5. Recrawl the site

1) Make a list of all pages on your site

If this sounds like a long and tedious process, that’s because it definitely can be. For some sites, this will be as easy as exporting the XML sitemap for the site. For other sites, especially those with hundreds or thousands of pages, creating a comprehensive list of all the pages on the site can take hours or days. However, I cannot emphasize enough how helpful this step has been for us. Having an index of all pages on the site gives you a guide to reference and consult as you work on getting your site indexed. It’s almost impossible to predict every issue that you’re going to encounter with an SPA, and if you don’t have an all-inclusive list of content to reference throughout your SEO work, it’s highly likely you’ll inadvertently leave some part of the site unindexed by search engines.

One solution that might enable you to streamline this process is to divide content into directories instead of individual pages. For example, if you know that you have a list of storeroom pages, include your /storeroom/ directory and make a note of how many pages that includes. Or if you have an e-commerce site, make a note of how many products you have in each shopping category and compile your list that way (though if you have an e-commerce site, I hope for your own sake you have a master list of products somewhere). Regardless of what you do to make this step less time-consuming, make sure you have a full list before continuing to step 2.
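If the site exposes an XML sitemap, a short script can give you a head start on that master list. The sketch below (Node.js, with a placeholder sitemap URL) simply extracts every <loc> entry; it is a convenience for building the inventory, not a substitute for checking the list against the site yourself.

```javascript
// Sketch: pull URLs out of an XML sitemap as a starting point for the
// master page list described in step 1. The sitemap URL is a placeholder.
const https = require('https');

const SITEMAP_URL = 'https://www.example.com/sitemap.xml';

https.get(SITEMAP_URL, function (res) {
  let xml = '';
  res.on('data', function (chunk) { xml += chunk; });
  res.on('end', function () {
    // Naive <loc> extraction; good enough for a quick page inventory.
    const urls = (xml.match(/<loc>([\s\S]*?)<\/loc>/g) || [])
      .map(function (tag) { return tag.replace(/<\/?loc>/g, '').trim(); });
    console.log(urls.length + ' URLs found');
    urls.forEach(function (url) { console.log(url); });
  });
}).on('error', function (err) {
  console.error('Could not fetch sitemap: ' + err.message);
});
```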

2) Install Prerender

Prerender is going to be your best friend when performing SEO for SPAs. Prerender is a service that will render your website in a virtual browser, then serve the static HTML content to web crawlers. From an SEO standpoint, this is as good a solution as you can hope for: users still get the fast, dynamic SPA experience while search engine crawlers can identify indexable content for search results.

Prerender’s pricing varies based on the size of your site and the freshness of the cache served to Google. Smaller sites (up to 250 pages) can use Prerender for free, while larger sites (or sites that update constantly) may need to pay as much as $200+/month. However, having an indexable version of your site that enables you to attract customers through organic search is invaluable. This is where that list you compiled in step 1 comes into play: if you can prioritize what sections of your site need to be served to search engines, or with what frequency, you may be able to save a little bit of money each month while still achieving SEO progress.
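As a rough illustration, if the Angular app is served from a Node/Express server, Prerender’s Express middleware (prerender-node) can be wired in along these lines. The token is a placeholder, and self-hosted Prerender installs point the middleware at a service URL instead, so treat this as a sketch rather than a drop-in configuration.

```javascript
// Sketch: serve prerendered HTML to crawlers from an Express app using the
// prerender-node middleware. The token below is a placeholder.
const express = require('express');
const prerender = require('prerender-node');

const app = express();

// For requests from known crawlers (Googlebot, Bingbot, etc.), the middleware
// fetches a rendered snapshot from the Prerender service and returns that
// static HTML; regular visitors fall through to the normal SPA.
app.use(prerender.set('prerenderToken', 'YOUR_PRERENDER_TOKEN'));

// Serve the Angular app's static build to everyone else.
app.use(express.static('dist'));

app.listen(3000);
```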

3) "Fetch as Google"

Within Google Search Console is an incredibly useful feature called “Fetch as Google.” “Fetch as Google” allows you to enter a URL from your site and fetch it as Googlebot would during a crawl. “Fetch” returns the HTTP response from the page, which includes a full download of the page source code as Googlebot sees it. “Fetch and Render” will return the HTTP response and will also provide a screenshot of the page as Googlebot saw it and as a site visitor would see it.

This has powerful applications for AngularJS sites. Even with Prerender installed, you may find that Google is still only partially displaying your website, or it may be omitting key features of your site that are helpful to users. Plugging the URL into “Fetch as Google” will let you review how your site appears to search engines and what further steps you may need to take to optimize your keyword rankings. Additionally, after requesting a “Fetch” or “Fetch and Render,” you have the option to “Request Indexing” for that page, which can be a handy catalyst for getting your site to appear in search results.

4) Configure Google Analytics (or Google Tag Manager)

As I mentioned above, SPAs can have serious trouble with recording Google Analytics data since they don’t track pageviews the way a standard website does. Instead of the traditional Google Analytics tracking code, you’ll need to install Analytics through some kind of alternative method.

One method that works well is to use the Angulartics plugin. Angulartics replaces standard pageview events with virtual pageview tracking, which tracks the entire user navigation across your application. Since SPAs dynamically load HTML content, these virtual pageviews are recorded based on user interactions with the site, which ultimately tracks the same user behavior as you would through traditional Analytics. Other people have found success using Google Tag Manager “History Change” triggers or other innovative methods, which are perfectly acceptable implementations. As long as your Google Analytics tracking records user interactions instead of conventional pageviews, your Analytics configuration should suffice.
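Under the hood, all of these options do roughly the same thing: listen for route changes and send a virtual pageview. Here is a minimal sketch of that idea using AngularJS’s $routeChangeSuccess event and the classic analytics.js ga() snippet (assumed to already be on the page, with a hypothetical module name); Angulartics and GTM’s History Change trigger package this same idea up for you.

```javascript
// Sketch of what Angulartics-style virtual pageview tracking boils down to:
// report a pageview to Google Analytics on every route change instead of on
// full page loads. Assumes an ngRoute-based app and the standard analytics.js
// snippet (global ga() function) already on the page.
angular.module('exampleApp')
  .run(['$rootScope', '$location', function ($rootScope, $location) {
    $rootScope.$on('$routeChangeSuccess', function () {
      if (window.ga) {
        ga('set', 'page', $location.path()); // the virtual URL for this view
        ga('send', 'pageview');              // count it as a pageview
      }
    });
  }]);
```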

5) Recrawl the site

After working through steps 1–4, you’re going to want to crawl the site yourself to find the errors that even Googlebot wasn’t anticipating. One issue we discovered early with a client was that, after installing Prerender, our crawlers were still running into a spider trap.

The crawl reported on the order of 150,000 pages, but there were not actually 150,000 pages on that particular site. Our crawlers had just found a recursive loop that kept generating longer and longer URL strings for the site content. This is something we would not have found in Google Search Console or Analytics. SPAs are notorious for causing tedious, inexplicable issues that you’ll only uncover by crawling the site yourself. Even if you follow the steps above and take as many precautions as possible, I can still almost guarantee you will come across a unique issue that can only be diagnosed through a crawl.

If you’ve come across any of these unique issues, let me know in the comments! I’d love to hear what other issues people have encountered with SPAs.

Results

As I mentioned earlier in the article, the process outlined above has enabled us to not only get client sites indexed, but even to get those sites ranking on first page for various keywords. Here’s an example of the keyword progress we made for one client with an AngularJS site:

Also, the organic traffic growth for that client over the course of seven months:

All of this goes to show that although SEO for SPAs can be tedious, laborious, and troublesome, it is not impossible. Follow the steps above, and you can have SEO success with your single-page app website.



Tuesday, May 30, 2017

Solar Energy International (SEI) Announces New Board Officers


Solar Energy International is pleased to announce Paul Bony as the new Board President. Paul has a diversified background in marketing and delivering energy efficiency and renewable energy products and services at the utility, manufacturer, distributor, dealer and consumer level and presently works for The Electric Gas Industry Association as their Director of Renewables and Contractor Development.  “I am excited to be following in the footsteps of others who have helped steer this great organization and its incredible staff and instructors to its position of leadership in solar training.  SEI has accomplished great things since its humble beginnings 26 years ago.  With the help of our team, our alumni and our friends we are on course to achieve even more as we play our part to move to a world powered by renewable energy.”  

Ken Gardner was elected as Vice President of the Board. Ken is a long-time PV, microhydro and solar water pumping instructor and is the Founder and CEO of Gardner Energy Services in Utah.  “I have thoroughly enjoyed teaching with SEI for the past eight years. Our goal is to make sure consumers and businesses get the quality they need when installing renewable energy systems. Being on the board allows me to make sure that happens and to make sure that people are treated fairly and learn what is needed to properly serve our customers.”

SEI is deeply grateful to Ed Marston for his seven years of service, including five years as President. Under Ed’s leadership, SEI emerged from a difficult financial situation stronger than ever by consolidating its operations, restructuring the organization, implementing a strategic plan that focused on SEI’s strengths, and expanding its campus in Paonia, Colorado, as well as expanding into Latin America and the Middle East. “I am truly grateful to have had Ed as our Board President, a mentor, a critic, a visionary, and a doer during these past five years. Ed has done a remarkable job of helping SEI think bigger so that we truly can have a world powered by renewable energy,” stated Executive Director Kathy Swartz.

Many of SEI’s Paonia students had the opportunity to hear Ed during morning introductions on their first day of class. Each week Ed gave a different opening, weaving together Paonia’s local coal-mining, agricultural, and marijuana economies; Thomas Edison, Topsy the elephant, and Tesla; and each person’s role in the energy revolution. These often ended in a round of applause (and once a standing ovation).

“I watched Ed build a Board of Directors that will most capably guide SEI along the exciting path that lies ahead.  He sees much further into the future than most of us.  We will miss his vision,” stated Board Secretary/Treasurer Sarah G. Bishop.

Though Ed left SEI and the other boards he served on, it’s all part of his plan. You may see him out doing wind sprints on the trails around Paonia, influencing local politics in the ways that only Ed can, or working to bring more businesses to town.

 


To Utility and Beyond

Susannah Pedigo came to Solar Energy International (SEI) with more than 20 years of management experience in the renewable energy sector. Her resume reads a lot like a recent history of the solar industry, as she has navigated the changes of an emerging market. Each role she took was newly designed to keep up with the changing demands of a growing industry, and the unifying thread of each job was a strong ability to facilitate communication among the broad set of solar stakeholders: governments and research centers, utilities and residential customers, technical staff and corporate developers. Throughout her career, she has demonstrated a remarkable ability to combine the requisite technical knowledge of the solar industry with business management experience to meet the needs of the ultimate end user.

Currently the Director of Origination and Business Development at Lendlease, Susannah manages business development and partnership efforts with utilities and other solar stakeholders with the goal of growing the utility-scale and wholesale-distributed generation pipeline in the US. Her job builds upon the knowledge she has gained partnering with utilities, public organizations, and private businesses while continually navigating the technical advancements and legislative terrain of the solar industry. In pursuit of that technical training, and for the joy that comes from working on the systems she deals with daily, Susannah came to SEI for a hands-on solar training course.

Susannah attended SEI’s PV201L: Solar Electric Lab Week (Grid-Direct). During this week-long, hands-on solar training, students rotate through multiple PV systems where they fully install and commission systems consisting of modules, inverters, and racking components from a wide range of manufacturers. Of her SEI experience she said, “The curriculum is a good balance of theory and practical applications, which I’d highly recommend.” During her week at SEI, Susannah was able to share her experience with fellow students at different points in their careers. Speaking of her time working with Xcel Energy, she gave recommendations for interfacing with utilities to the future solar installers in the class.

She reiterated the imperative of solar training for anyone in the industry, saying, “Though you may be working in sales or business development, you will work with technical people every day. The industry is highly competitive and you’re constantly looking for ways to add value and be cost effective. I need technical knowledge to interact with our technical staff to get the best value and most cost-effective solution for our clients. This is true for a rooftop system or a utility-scale solar plant.”

While Susannah has spent a large part of her career in solar, she didn’t begin there. She began with a degree in Landscape Architecture and a passion for sustainability, and she spent ten years working in city planning on the Front Range of Colorado before entering the solar industry. Susannah’s first position in solar was at the National Renewable Energy Laboratory (NREL) as the Senior Communication Manager, where she built relationships with the Department of Energy to significantly grow NREL’s Solar Research Program. During this time, she also earned an MBA from the University of Denver.

This experience led her to Xcel Energy, the fifth largest investor-owned utility in the United States. While at Xcel Energy, she oversaw the management of the company’s customer-facing renewable energy products such as Solar*Rewards (250+ MW rooftop solar). She was also a key participant in the legislative, rulemaking and subsequent program development efforts for community solar initiatives in Colorado and Minnesota. Her career illustrates what is possible in the rapidly growing solar industry, whether you bring business acumen, legislative knowledge or technical ability. The industry is broad and, with proper training, there’s a place for everyone.


Would Hayek Want a Carbon Tax?

For some time now, a small but vocal group of writers have tried to convince the base of libertarians and conservatives that a carbon tax is actually consistent with their principles. Although I disagree with their arguments, I’m happy to have such a debate, as I think the case against a carbon tax is very strong.

However, what I find disheartening are more recent attempts to hijack the legacies of deceased libertarians in order to bolster support for a carbon tax, even though these icons would probably recoil from this association of their names with the proposal.

The latest example is Ed Dolan’s recent essay for the Niskanen Center that argued Friedrich Hayek would have supported a carbon tax. As I will show, there is no foundation for such a claim; in fact, Dolan himself admits that Hayek never wrote on the issue. Beyond that, I’ll provide evidence that Hayek was skeptical of top-down correctives from government officials as a way to address negative externalities.

Dolan on Hayek on Carbon Taxes

Dolan’s first sentence alerts us that his entire article is speculative: “As far as I know, Friedrich Hayek never wrote a word about climate change…” Yet Dolan presses on: “…but two of his most famous works contain arguments that bear directly on this key issue of environmental policy. Judging from what he wrote about the role of science in public policy and the use of knowledge in society,” Dolan goes on to claim Hayek “might have supported a carbon tax.”

Specifically, Dolan analyzes Hayek’s thoughts on two key areas: (1) assembling expertise on scientific matters and (2) using the price system to mobilize “dispersed knowledge.” After reviewing Hayek’s comments on these areas, Dolan concludes: “Going by what he wrote about the proper roles of scientific and dispersed knowledge, I have to conclude that he would have favored a carbon tax over doing nothing.”

Let’s put aside Dolan’s stacking-of-the-deck by using the phrase “doing nothing.” By the same token, interventionists who want to, say, help the poor could tout huge government welfare programs on the one hand, and contrast them with the “doing nothing” approach of relying on capitalism to lift people out of poverty; surely Hayek would see through that rhetorical sleight of hand.

On his two substantive claims, I also disagree with Dolan. I cover each in turn.

Hayek on Scientific Experts

Dolan quotes Hayek, who wrote:

It may be admitted that, as far as scientific knowledge is concerned, a body of suitably chosen experts may be in the best position to command all the best knowledge available—although this is of course merely shifting the difficulty to the problem of selecting the experts.

Dolan then alludes to Hayek’s famous essay, “Why I Am Not a Conservative,” in which Hayek complains that conservative intellectuals too often reject some new finding of science because of its political or moral implications. (For example, Hayek thought it was absurd to doubt the evidence of Darwinian evolution merely on the grounds that its acceptance would lead to immoral consequences.)

So far, so good. Yet Dolan then extrapolates Hayek’s quotations into the following statement, which actually doesn’t follow from Hayek at all:

[S]ince there will be differences in findings among credible scientists, we should test our policy recommendations against a full range of views. If our policies make sense only for a limited part of that range, or only for views that lie outside the current scientific consensus, we should say so.

To repeat, the above quotation is Dolan talking, as if he were merely paraphrasing Hayek; and yet, Dolan never quoted Hayek saying anything to that effect. For all we know, Hayek might have endorsed this opinion, but in terms of his essay, Dolan just made that principle up and attributed it to Hayek.

But let’s grant that the above is Hayek’s view. The whole point of Dolan bringing this up is to discredit commentators who (in his view) cherry-pick scientific findings outside the mainstream consensus. To show how fair and balanced he is, Dolan picks an example from both the progressive and libertarian sides of the spectrum, namely Joe Romm and a paper by me (and two co-authors).

Here is Dolan’s case for arguing that Hayek would not have heeded the writings of Romm or the Cato critique of a carbon tax that I co-authored:

…[C]ompare this paper by Joe Romm on ThinkProgress with this one by Robert Murphy, Patrick Michaels, and Paul Knappenberger for the Cato Institute. Both papers emphasize the importance of a parameter known as equilibrium climate sensitivity (ECS), which means the amount by which equilibrium surface temperature in degrees C would increase following a doubling of the level of atmospheric CO2. Both papers begin from a widely publicized IPCC estimate that ECS is “likely” to be in the range of 1.5°C to 4.5°C.

However, in further discussion of the implications of ECS for public policy, Murphy et al. focus on recent estimates that place the lower bound as low as a fraction of one degree, maintaining that such estimates “have come to dominate the contemporary scientific literature on the topic.” Romm, on the other hand, acknowledges those same lower estimates, but maintains that they omit “slow feedback” pathways and consequently greatly underestimate the likely warming from a given increase in CO2. “Anyone who tells you the recent literature suggests things will be better than we thought hasn’t read the recent literature,” he goes on to say.

We do not need to know who is right here to see that we are a long way from Hayek’s ideal, in which policy analysts of all ideological persuasions would work from a common range of scientific findings. Instead, we see those with progressive political inclinations highlighting estimates from the pessimistic end of the scientific consensus while those with conservative inclinations highlight optimistic estimates.

The only problem with Dolan’s move here is that he completely mischaracterized the argument in my Cato paper. Ironically, in that paper we are doing exactly what Dolan wanted, when we criticized the Obama Administration’s 2013 estimate of the “social cost of carbon.”

I realize this is a technical issue and I don’t want to get too deep in the weeds in the present essay; the interested reader should refer to pages 3-6 of our Cato study. But here’s the quick version of our argument:

(1) In order to calculate the “social cost of carbon”—which is used by the federal government to estimate the “social benefits” of regulations that reduce greenhouse gas emissions, and also serves as a guide to the “optimal” level of a carbon tax—the Obama Administration established an Interagency Working Group (IWG). It first gave an estimate of the “social cost of carbon” (SCC) in 2010, and then updated the numbers in 2013.

(2) The Obama Administration’s IWG made several adjustments to the parameter inputs into the three computer models it relied upon to calculate the SCC. Yet it did not tune down its input of the Equilibrium Climate Sensitivity (ECS) parameter, even though the peer-reviewed literature was showing that the estimate of this parameter was trending downward. We cited various recent studies to show that the figure used by the Obama Administration’s IWG (which relied on a 2007 analysis by Roe and Baker) overstated the amount of warming being generated by the computer simulations.

(3) As an independent piece of evidence to bolster this claim, in the Cato study we pointed out that from the 2007 Intergovernmental Panel on Climate Change (IPCC) report to the 2013 report—which were the Fourth and Fifth assessments, respectively—the “likely” range of the ECS went from 2°C – 4.5°C (in the 2007 IPCC report) down to 1.5°C – 4.5°C (in the 2013 IPCC report). In other words, the epitome of the “scientific consensus”—namely, the UN’s own periodic report on the climate science—had reduced the lower range of its estimate of the “likely” sensitivity of the global temperature to an increase in CO2 concentrations, when contrasting the state of the literature in 2013 vs. 2007.

Notice that what we did in the Cato study is thus the exact opposite of what Dolan alleged. (Note: I was of course the economist of the trio of co-authors; climate scientists Pat Michaels and Paul Knappenberger were responsible for these particular arguments.) Dolan alleged that the Cato study started out with the “consensus” IPCC estimate of Equilibrium Climate Sensitivity, and then cherry-picked recent studies with low-ball estimates in order to sow doubt upon the official number. Because of this allegation, Dolan thought Hayek would have dismissed the Cato authors as ideologues who feared to go where the science led.

But as we’ve seen above, that’s the exact opposite of what happened. No, the Cato study pointed to the latest literature to show that the Obama Administration’s estimate of the social cost of carbon was out of date. In order to demonstrate the objectivity of our claims, in the Cato study we showed that the IPCC agreed that the new literature coming in since 2007 meant that the previous estimates of the parameter had been too high.

Now that I have clarified the argument, I anxiously wait for Ed Dolan to admit that Hayek would have rejected the Obama Administration’s official pronouncements of the social cost of carbon as being ideologically motivated, relying on unduly pessimistic estimates of the ECS parameter.

Dolan Thinks Hayek Would Have Supported Government Prices Without a Market (!)

In the previous section we saw that Dolan’s own argument would discredit the Obama Administration’s estimates of the social cost of carbon. In this section, we’ll see the weakness in his approach to Hayek’s concept of “dispersed knowledge” and the importance of the market price system in giving that information to everyone. Here’s Dolan’s conclusion:

Markets without prices are what we have now. We have markets for energy, capital goods, and consumer goods. Within each of those markets, producers and consumers make choices based on their own knowledge of time and place and on the prices of labor and materials—but with no prices to carry knowledge of scarcities that exist at the planetary level. As producers, should we use electric power from the coal-fired grid or install solar panels?…There is no carbon price to help us make up our minds.

With a carbon tax, we still would not have scarcity prices that were generated in real markets, but we would at least have prices. As the effects of the carbon tax rippled through the price system, it would transmit the kind of knowledge Hayek regarded as essential…

In short, if you can’t have both prices and markets, it seems that you choose markets without prices if you think ideologically, but prices without markets if you think pragmatically…

Which side would Hayek have come down on? Going by what he wrote about the proper roles of scientific and dispersed knowledge, I have to conclude that he would have favored a carbon tax over doing nothing.

Let’s step back and look at the big picture here. Dolan is claiming that Hayek would have supported a system whereby government officials consult experts and then announce a “price” that is not generated from real market processes. Dolan thinks Hayek’s writings on dispersed knowledge show that he (Hayek) would favor such an approach.

Well, to repeat, Hayek never wrote about climate change or carbon taxes, so this is all speculative. But we actually do know what Hayek thought about government proclamations of “prices” that come from government officials relying on experts—this is known as “market socialism.” And far from endorsing this scheme, Hayek famously fought it with all his might, precisely because of his views on dispersed knowledge.

In case the reader thinks I’m being unfair by going to full-blown socialism, let us quote Hayek on the issue of Pigovian pricing in order to correct so-called negative externalities. This is exactly the grounds on which Dolan and others are claiming that a carbon tax is a “conservative” or “libertarian” policy solution. And yet Hayek warns us:

Perhaps even more instructive is the case of the late Professor A. C. Pigou, the founder of the theory of welfare economics—who at the end of a long life devoted almost entirely to the task of defining the conditions in which government interference might be used to improve upon the results of the market, had to concede that the practical value of these theoretical considerations was somewhat doubtful because we are rarely in a position to ascertain whether the particular circumstance to which the theory refers exist in fact in any given situation. Not because he knows so much, but because he knows how much he would have to know in order to interfere successfully, and because he knows that he will never know all the relevant circumstances, it would seem that the economist should refrain from recommending isolated acts of interference even in conditions in which the theory tells him that they may be sometimes beneficial. [Hayek, Studies in Philosophy, Politics, and Economics, (London, 1969), p. 264.]

So it’s true, Hayek was not in principle opposed to a tax on negative externalities, but as a pragmatic matter he warned that limitations on knowledge should make us very wary of rushing in to “fix” the market outcome when we are operating in a sea of ignorance. This is the exact opposite of Dolan’s conclusions from Hayek’s writings on knowledge.

Conclusion

This is not the first time that someone from the Niskanen Center has misled readers on the legacy of iconic figures. Jerry Taylor, in his “Conservative Case for a Carbon Tax,” held up Milton Friedman and Murray Rothbard and at least implied to the reader that they were supporters of his (Taylor’s) view. In response, we at IER pointed out:

(1) Taylor himself in a 1998 piece said that the viewpoint of William Niskanen (namesake of the Center) meant the case for the Kyoto Protocol was “shockingly weak.”

(2) Milton Friedman in 1998 gave a blurb to a “skeptic” book by writing, “This encyclopedic and even-handed survey of the evidence of global warming is a welcome corrective to the raging hysteria about the alleged dangers of global warming. Moore demonstrates conclusively that global warming is more likely to benefit than to harm the general public.”

(3) Murray Rothbard in literally the next sentence after the one Taylor quoted (from Rothbard’s 1973 essay on pollution) explicitly rejects government efforts to use quotas or taxes to reduce pollution to the “optimal” level. Rothbard explains: “Not only would these proposals grant an enormous amount of bureaucratic power to government in the name of safeguarding the ‘free market’; they would continue to override property rights in the name of a collective decision enforced by the State. This is far from any genuine ‘free market’…”

Turning back to Dolan’s more recent piece, I note that he spends a lot of space quoting from free-market environmentalist authors who reject government “carbon pricing.” Dolan rejects their worries and concludes that their resistance is merely ideological.

On the contrary, I think it’s Jerry Taylor and Ed Dolan who are misunderstanding. It’s not a coincidence that so many current conservatives and libertarians reject cap-and-trade and carbon taxes, or that the (correctly interpreted) legacy of famous libertarian icons also cuts against such top-down interventions. Taking into account the dispersed knowledge issues studied by Hayek, and the Public Choice incentive problems studied by William Niskanen, it should be clear why the rank-and-file libertarians are so skeptical of a carbon tax.


No, Paid Search Audiences Won’t Replace Keywords

Posted by PPCKirk

I have been chewing on a keyword vs. audience targeting post for roughly two years now. In that time we have seen audience targeting grow in popularity (as expected) and depth.

“Popularity” is somewhat of an understatement here. I would go so far as to say that I've heard it lauded in messianic-like “thy kingdom come, thy will be done” reverential awe by some paid search marketers, as if paid search were lacking a heartbeat before the life-giving audience targeting had arrived and 1-2-3-clear’ed it into relevance.

However, I would argue that despite audience targeting’s popularity (and understandable success), we have also seen the revelation of some weaknesses as well. It turns out it’s not quite the heroic, rescue-the-captives targeting method paid searchers had hoped it would be.

The purpose of this post is to argue against the notion that audience targeting can replace the keyword in paid search.

Now, before we get into the throes of keyword philosophy, I’d like to reduce the number of angry comments this post receives by acknowledging a crucial point.

It is not my intention in any way to set up a false dichotomy. Yes, I believe the keyword is still the most valuable form of targeting for a paid search marketer, but I also believe that audience targeting can play a valuable complementary role in search bidding.

In fact, as I think about it, I would argue that I am writing this post in response to what I have heard become a false dichotomy. That is, that audience targeting is better than keyword targeting and will eventually replace it.

I disagree with this idea vehemently, as I will demonstrate in the rest of this article.

One seasoned (age, not steak) traditional marketer’s point of view

The best illustration I've heard of the core weakness of audience targeting came from an older traditional marketer who has probably never accessed the Keyword Planner in his life.

“I have two teenage daughters,” he revealed, with no small amount of pride.

“They are within 18 months of each other, so in age demographic targeting they are the same person.”

“They are both young women, so in gender demographic targeting they are the same person.”

“They are both my daughters in my care, so in income demographic targeting they are the same person.”

“They are both living in my house, so in geographical targeting they are the same person.”

“They share the same friends, so in social targeting they are the same person.”

“However, in terms of personality, they couldn’t be more different. One is artistic and enjoys heels and dresses and makeup. The other loves the outdoors and sports, and spends her time in blue jeans and sneakers.”

If an audience-targeting marketer selling spring dresses saw them in his marketing list, he would (1) see two older high school girls with the same income in the same geographical area, (2) assume they are both interested in what he has to sell, and (3) only make one sale.

The problem isn’t with his targeting, the problem is that not all those forced into an audience persona box will fit.

In September of 2015, Aaron Levy (a brilliant marketing mind; go follow him) wrote a fabulously under-shared post revealing these weaknesses in another way: What You Think You Know About Your Customers’ Persona is Wrong

In this article, Aaron first bravely broaches the subject of audience targeting by describing how it is far from the exact science we all have hoped it to be. He noted a few ways that audience targeting can be erroneous, and even *gasp* used data to formulate his conclusions.

It’s OK to question audience targeting — really!

Let me be clear: I believe audience targeting is popular because there genuinely is value in it (it's amazing data to have… when it's accurate!). The insights we can get about personas, which we can then use to power our ads, are quite amazing and powerful.

So, why the heck am I droning on about audience targeting weaknesses? Well, I’m trying to set you up for something. I’m trying to get us to admit that audience targeting itself has some weaknesses, and isn’t the savior of all digital marketing that some make it out to be, and that there is a tried-and-true solution that fits well with demographic targeting, but is not replaced by it. It is a targeting that we paid searchers have used joyfully and successfully for years now.

It is the keyword.

Whereas audience targeting chafes under the law of averages (i.e., “at some point, someone in my demographic targeted list has to actually be interested in what I am selling”), keyword targeting shines in individual-revealing user intent.

Keyword targeting does something an audience can never, ever, ever do...

Keywords: Personal intent powerhouses

A keyword is still my favorite form of targeting in paid search because it reveals individual, personal, and temporal intent. Those aren’t just three buzzwords I pulled out of the air because I needed to stretch this already obesely-long post out further. They are intentional, and worth exploring.

Individual

A keyword is such a powerful targeting method because it is written (or spoken!) by a single person. I mean, let’s be honest, it’s rare to have more than one person huddled around the computer shouting at it. Keywords are generally from the mind of one individual, and because of that they have frightening potential.

Remember, audience targeting is based off of assumptions. That is, you're taking a group of people who “probably” think the same way in a certain area, but does that mean they cannot have unique tastes? For instance, one person may prefer to buy sneakers while another prefers heels.

Keyword targeting is demographic-blind.

It doesn’t care who you are, where you’re from, what you did, as long as you love me… err, I mean, it doesn’t care about your demographic, just about what you're individually interested in.

Personal

The next aspect of keywords powering their targeting awesomeness is that they reveal personal intent. Whereas the “individual” aspect of keyword targeting narrows our targeting from a group of people to a single person, the “personal” aspect of keyword targeting goes into the very mind of that individual.

Don’t you wish there was a way to market to people in which you could truly discern the intentions of their hearts? Wouldn’t that be a powerful method of targeting? Well, yes — and that is keyword targeting!

Think about it: a keyword is a form of communication. It is a person typing or telling you what is on their mind. For a split second, in their search, you and they are as connected through communication as Alexander Graham Bell and Thomas Watson on the first phone call. That person is revealing to you what's on her mind, and that's a power which cannot be underestimated.

When a person tells Google they want to know “how does someone earn a black belt,” that is telling your client — the Jumping Judo Janes of Jordan — that this person genuinely wants to learn more about their services, and the client can display an ad that matches that intent (Ready for that Black Belt? It’s Not Hard, Let Us Help!). Paid search keywords officiate the wedding of personal intent with advertising in a way that previous marketers could only dream of. We aren’t finding random people we think might be interested based upon where they live. We are responding to a person telling us they are interested.

Temporal

The final note of keyword targeting that cannot be underestimated is the temporal aspect. Anyone worth their salt in marketing can tell you “timing is everything”. With keyword targeting, the timing is inseparable from the intent. When is this person interested in learning about your Judo classes? At the time they are searching, NOW!

You are not blasting your ads into your users’ lives, interrupting them as they go about their business or family time, hoping to jumpstart their interest by distracting them from their activities. You are responding to their query, at the very time they are interested in learning more.

Timing. Is. Everything.

The situation settles into stickiness

Thus, to summarize: a “search” is done when an individual reveals his/her personal intent with communication (keywords/queries) at a specific time. Because of that, I maintain that keyword targeting trumps audience targeting in paid search.

Paid search is an evolving industry, but it is still “search,” which requires communication, which requires words (until that time when the emoji takes over the English language, but that’s okay because the rioting in the streets will have gotten us first).

Of course, we would be remiss in ignoring some legitimate questions which inevitably arise. As ideal as the outline I've laid out before you sounds, you're probably beginning to formulate something like the following four questions.

  • What about low search volume keywords?
  • What if the search engines kill keyword targeting?
  • What if IoT monsters kill search engines?
  • What about social ads?

We’ll close by discussing each of these four questions.

Low search volume terms (LSVs)

Low search volume keywords stink like poo (excuse the rather strong language there). I’m not sure if there is any data on this out there (if so, please share it below), but I have run into low search volume terms far more in the past year than when I first started managing PPC campaigns in 2010.

I don’t know all the reasons for this; perhaps it’s worth another blog post, but the reality is it’s getting harder to be creative and target high-value long-tail keywords when so many are getting shut off due to low search volume.

This seems like a fairly smooth way being paved for Google/Bing to eventually “take over” (i.e., “automate for our good”) keyword targeting, at the very least for SMBs (small-medium businesses) where LSVs can be a significant problem. In this instance, the keyword would still be around, it just wouldn’t be managed by us PPCers directly. Boo.

Search engine decrees

I’ve already addressed the power search engines have here, but I will be the first to admit that, as much as I like keyword targeting and as much as I have hopefully proven how valuable it is, it still would be a fairly easy thing for Google or Bing to kill off completely. Major boo.

Since paid search relies on keywords and queries and language to work, I imagine this would look more like an automated solution (think DSAs and shopping), in which they make keyword targeting into a dynamic system that works in conjunction with audience targeting.

While this was about a year and a half ago, it is worth noting that at Hero Conference in London, Bing Ads’ ebullient Tor Crockett did make the public statement that Bing at the time had no plans to sunset the keyword as a bidding option. We can only hope this sentiment remains, and transfers over to Google as well.

But Internet of Things (IoT) Frankenstein devices!

Finally, it could be that search engines won’t be around forever. Perhaps this will look like IoT devices such as Alexa that incorporate some level of search into them, but pull traffic away from using Google/Bing search bars. As an example of this in real life, you don’t need to ask Google where to find (queries, keywords, communication, search) the best price on laundry detergent if you can just push the Dash button, or your smart washing machine can just order you more without a search effort.


On the other hand, I still believe we're a long way off from this in the same way that the freak-out over mobile devices killing personal computers has slowed down. That is, we still utilize our computers for education & work (even if personal usage revolves around tablets and mobile devices and IoT freaks-of-nature… smart toasters anyone?) and our mobile devices for queries on the go. Computers are still a primary source of search in terms of work and education as well as more intensive personal activities (vacation planning, for instance), and thus computers still rely heavily on search. Mobile devices are still heavily query-centered for various tasks, especially as voice search (still query-centered!) kicks in harder.

The social effect

Social is its own animal in a way, and why I believe it is already and will continue to have an effect on search and keywords (though not in a terribly worrisome way). Social definitely pulls a level of traffic from search, specifically in product queries. “Who has used this dishwasher before, any other recommendations?” Social ads are exploding in popularity as well, and in large part because they are working. People are purchasing more than they ever have from social ads and marketers are rushing to be there for them.

The flip side of this: a social and paid search comparison is apples-to-oranges. There are different motivations and purposes for using search engines and querying your friends.

Audience targeting works great in a social setting since that social network has phenomenally accurate and specific targeting for individuals, but it is the rare individual curious about the ideal condom to purchase who queries his family and friends on Facebook. There will always be elements of social and search that are unique and valuable in their own way, and audience targeting for social and keyword targeting for search complement those unique elements of each.

Idealism incarnate

Thus, it is my belief that as long as we have search, we will still have keywords and keyword targeting will be the best way to target — as long as costs remain low enough to be realistic for budgets and the search engines don’t kill keyword bidding for an automated solution.

Don’t give up, the keyword is not dead. Stay focused, and carry on with your match types!

I want to close by re-acknowledging the crucial point I opened with.

It has not been my intention in any way to set up a false dichotomy. In fact, as I think about it, I would argue that I am writing this in response to what I have heard become a false dichotomy. That is, that audience targeting is better than keyword targeting and will eventually replace it…

I believe the keyword is still the most valuable form of targeting for a paid search marketer, but I also believe that audience demographics can play a valuable complementary role in bidding.

A prime example that we already use is remarketing lists for search ads, in which we can layer on remarketing audiences in both Google and Bing into our search queries. Wouldn’t it be amazing if we could someday do this with massive amounts of audience data? I've said this before, but were Bing Ads to use its LinkedIn acquisition to allow us to layer on LinkedIn audiences into our current keyword framework, the B2B angels would surely rejoice over us (Bing has responded, by the way, that something is in the works!).

Either way, I hope I've demonstrated that far from being on its deathbed, the keyword is still the most essential tool in the paid search marketer’s toolbox.



Monday, May 29, 2017

Evidence of the Surprising State of JavaScript Indexing

Posted by willcritchlow

Back when I started in this industry, it was standard advice to tell our clients that the search engines couldn’t execute JavaScript (JS), and anything that relied on JS would be effectively invisible and never appear in the index. Over the years, that has changed gradually, from early work-arounds (such as the horrible escaped fragment approach my colleague Rob wrote about back in 2010) to the actual execution of JS in the indexing pipeline that we see today, at least at Google.

In this article, I want to explore some things we've seen about JS indexing behavior in the wild and in controlled tests and share some tentative conclusions I've drawn about how it must be working.

A brief introduction to JS indexing

At its most basic, the idea behind JavaScript-enabled indexing is to get closer to the search engine seeing the page as the user sees it. Most users browse with JavaScript enabled, and many sites either fail without it or are severely limited. While traditional indexing considers just the raw HTML source received from the server, users typically see a page rendered based on the DOM (Document Object Model) which can be modified by JavaScript running in their web browser. JS-enabled indexing considers all content in the rendered DOM, not just that which appears in the raw HTML.

There are some complexities even in this basic definition (answers in brackets as I understand them; there's a small sketch of the timing and interaction cases after the list):

  • What about JavaScript that requests additional content from the server? (This will generally be included, subject to timeout limits)
  • What about JavaScript that executes some time after the page loads? (This will generally only be indexed up to some time limit, possibly in the region of 5 seconds)
  • What about JavaScript that executes on some user interaction such as scrolling or clicking? (This will generally not be included)
  • What about JavaScript in external files rather than in-line? (This will generally be included, as long as those external files are not blocked from the robot — though see the caveat in experiments below)
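
To make those bracketed answers concrete, here is a small hypothetical sketch of three loading patterns (element ID and copy invented): the first is generally picked up, the second depends on the execution timeout, and the third is generally not indexed at all.

  const target = document.getElementById('content'); // assumes a #content element exists

  if (target) {
    // 1. Added immediately on load: generally included in JS-enabled indexing.
    target.insertAdjacentHTML('beforeend', '<p>Loaded immediately.</p>');

    // 2. Added after a long delay: only included if it beats the rendering timeout
    //    (reported to be somewhere in the region of 5 seconds).
    setTimeout(() => {
      target.insertAdjacentHTML('beforeend', '<p>Loaded after 10 seconds.</p>');
    }, 10000);

    // 3. Added on user interaction: generally not included, since the indexer
    //    does not scroll or click.
    target.addEventListener('click', () => {
      target.insertAdjacentHTML('beforeend', '<p>Loaded on click.</p>');
    });
  }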

For more on the technical details, I recommend my ex-colleague Justin’s writing on the subject.

A high-level overview of my view of JavaScript best practices

Despite the incredible work-arounds of the past (which always seemed like more effort than graceful degradation to me), the “right” answer has existed since at least 2012, with the introduction of PushState. Rob wrote about this one, too. Back then, however, it was pretty clunky and manual, and it required a concerted effort to ensure that the URL was updated in the user’s browser for each view that should be considered a “page,” that the server could return full HTML for those pages in response to fresh requests for each URL, and that the back button was handled correctly by your JavaScript.
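
To give a feel for that manual wiring, here is a rough client-side sketch (the function name and the #app container are invented, and the server-side half of returning full HTML for each URL is not shown): update the address bar for each "view" and handle the back button yourself.

  // Hypothetical PushState wiring: keep the URL in sync with each "view" and restore
  // views when the user navigates with the back/forward buttons.
  async function loadView(url: string, push: boolean = true): Promise<void> {
    const html = await (await fetch(url)).text();
    document.getElementById('app')!.innerHTML = html; // assumes an #app container
    if (push) {
      history.pushState({ url }, '', url);
    }
  }

  window.addEventListener('popstate', (event: PopStateEvent) => {
    if (event.state && event.state.url) {
      loadView(event.state.url, false); // back button: re-render the previous view
    }
  });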

Along the way, in my opinion, too many sites got distracted by a separate prerendering step. This is an approach that does the equivalent of running a headless browser to generate static HTML pages that include any changes made by JavaScript on page load, then serving those snapshots instead of the JS-reliant page in response to requests from bots. It typically treats bots differently, in a way that Google tolerates, as long as the snapshots do represent the user experience. In my opinion, this approach is a poor compromise that's too susceptible to silent failures and falling out of date. We've seen a bunch of sites suffer traffic drops due to serving Googlebot broken experiences that were not immediately detected because no regular users saw the prerendered pages.
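
The fragile part is the user-agent branch at the heart of any prerendering setup. A deliberately simplified sketch (all names invented) shows why failures stay invisible: if snapshot generation breaks, only bots ever receive the broken output, so no human visitor reports it.

  // Hypothetical bot detection for a prerendering setup.
  const BOT_PATTERN = /googlebot|bingbot|baiduspider/i;

  function chooseResponseBody(userAgent: string, jsShellHtml: string, snapshotHtml: string): string {
    // Bots get the pre-rendered snapshot; everyone else gets the JS-reliant shell.
    return BOT_PATTERN.test(userAgent) ? snapshotHtml : jsShellHtml;
  }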

These days, if you need or want JS-enhanced functionality, a growing number of the top frameworks have the ability to work the way Rob described in 2012, which is now called isomorphic (roughly meaning “the same”).

Isomorphic JavaScript serves HTML that corresponds to the rendered DOM for each URL, and updates the URL for each “view” that should exist as a separate page as the content is updated via JS. With this implementation, there is actually no need to render the page to index basic content, as it's served in response to any fresh request.
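
A minimal sketch of the isomorphic idea (all names invented): a single rendering function produces the markup, whether it runs on the server for the initial full-HTML response or in the browser for subsequent client-side navigations.

  interface Product {
    id: number;
    name: string;
    price: number;
  }

  // The same function builds the markup on both the server and the client.
  function renderProductPage(product: Product): string {
    return `<h1>${product.name}</h1><p>$${product.price.toFixed(2)}</p>`;
  }

  // Server (sketch): respond to a fresh GET /products/42 with full HTML built from this function.
  // Client (sketch): on navigation, reuse it and keep the URL in sync with the new view.
  function navigateTo(product: Product): void {
    document.getElementById('app')!.innerHTML = renderProductPage(product); // assumes an #app container
    history.pushState({}, '', `/products/${product.id}`);
  }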

I was fascinated by this piece of research published recently — you should go and read the whole study. In particular, you should watch the video recommended in the post, in which the speaker — who is an Angular developer and evangelist — emphasizes the need for an isomorphic approach.

Resources for auditing JavaScript

If you work in SEO, you will increasingly find yourself called upon to figure out whether a particular implementation is correct (hopefully on a staging/development server before it’s deployed live, but who are we kidding? You’ll be doing this live, too).

To do that, here are some resources I’ve found useful:

Some surprising/interesting results

There are likely to be timeouts on JavaScript execution

I already linked above to the ScreamingFrog post that mentions experiments they have done to measure the timeout Google uses to determine when to stop executing JavaScript (they found a limit of around 5 seconds).

It may be more complicated than that, however. This segment of a thread is interesting. It's from a Hacker News user who goes by the username KMag and who claims to have worked at Google on the JS execution part of the indexing pipeline from 2006–2010. It’s in relation to another user speculating that Google would not care about content loaded “async” (i.e. asynchronously — in other words, loaded as part of new HTTP requests that are triggered in the background while assets continue to download):

“Actually, we did care about this content. I'm not at liberty to explain the details, but we did execute setTimeouts up to some time limit.

If they're smart, they actually make the exact timeout a function of a HMAC of the loaded source, to make it very difficult to experiment around, find the exact limits, and fool the indexing system. Back in 2010, it was still a fixed time limit.”

What that means is that although it was initially a fixed timeout, he’s speculating (or possibly sharing without directly doing so) that timeouts are programmatically determined (presumably based on page importance and JavaScript reliance) and that they may be tied to the exact source code (the reference to “HMAC” is to do with a technical mechanism for spotting if the page has changed).
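
Purely to illustrate the mechanism being hinted at (this is speculation about speculation, and the hash, key, and bounds below are entirely invented), a per-page timeout derived from an HMAC of the source might look something like this:

  import { createHmac } from 'crypto';

  // Hypothetical: derive a JS execution timeout from an HMAC of the page source, so the
  // cutoff is stable for a given page but hard to probe or game from the outside.
  function jsTimeoutMs(pageSource: string, secretKey: string): number {
    const digest = createHmac('sha256', secretKey).update(pageSource).digest();
    const minMs = 3000;
    const maxMs = 8000;
    return minMs + (digest.readUInt32BE(0) % (maxMs - minMs)); // maps into [minMs, maxMs)
  }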

It matters how your JS is executed

I referenced this recent study earlier. In it, the author found:

Inline vs. External vs. Bundled JavaScript makes a huge difference for Googlebot

The charts at the end show the extent to which popular JavaScript frameworks perform differently depending on how they're called, with a range of performance from passing every test to failing almost every test. For example, here’s the chart for Angular:

[Image: chart of Angular’s indexing test results from the study]

It’s definitely worth reading the whole thing and reviewing the performance of the different frameworks. There's more evidence of Google saving computing resources in some areas, as well as surprising results between different frameworks.

CRO tests are getting indexed

When we first started seeing JavaScript-based split-testing platforms designed for testing changes aimed at improving conversion rate (CRO = conversion rate optimization), their inline changes to individual pages were invisible to the search engines. As Google in particular has moved up the JavaScript competency ladder, from executing simple inline JS to more complex JS in external files, we are now seeing some CRO-platform-created changes being indexed. A simplified version of what’s happening is as follows (a sketch of the bucketing logic appears after the list):

  • For users:
    • CRO platforms typically take a visitor to a page, check for the existence of a cookie, and if there isn’t one, randomly assign the visitor to group A or group B
    • Based on either the cookie value or the new assignment, the user is either served the page unchanged, or sees a version that is modified in their browser by JavaScript loaded from the CRO platform’s CDN (content delivery network)
    • A cookie is then set to make sure that the user sees the same version if they revisit that page later
  • For Googlebot:
    • The reliance on external JavaScript used to prevent both the bucketing and the inline changes from being indexed
    • With external JavaScript now being loaded, and with many of these inline changes being made using standard libraries (such as jQuery), Google is able to index the variant and hence we see CRO experiments sometimes being indexed
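
Here is the bucketing sketch promised above. The cookie name, split ratio, and variant change are all invented for the illustration; the point is that everything happens via externally loaded JS and standard DOM APIs, which a JS-executing crawler can now follow.

  // Hypothetical client-side bucketing of the kind these platforms perform.
  function assignVariant(): 'A' | 'B' {
    const match = document.cookie.match(/(?:^|;\s*)cro_variant=([AB])/);
    if (match) {
      return match[1] as 'A' | 'B'; // returning visitor: keep the same experience
    }
    const variant: 'A' | 'B' = Math.random() < 0.5 ? 'A' : 'B';
    document.cookie = `cro_variant=${variant}; path=/; max-age=${60 * 60 * 24 * 30}`;
    return variant;
  }

  // Group A sees the page unchanged; group B gets an inline DOM change that a
  // JS-executing crawler may index.
  if (assignVariant() === 'B') {
    const headline = document.querySelector('h1');
    if (headline) {
      headline.textContent = 'Variant B headline';
    }
  }

A crawler generally doesn't persist cookies between fetches, so it may land in a different bucket on each visit, which would make the indexed wording somewhat unpredictable.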

I might have expected the platforms to block their JS with robots.txt, but at least the main platforms I’ve looked at don't do that. With Google being sympathetic towards testing, however, this shouldn’t be a major problem — just something to be aware of as you build out your user-facing CRO tests. All the more reason for your UX and SEO teams to work closely together and communicate well.

Split tests show SEO improvements from removing a reliance on JS

Although we would like to do a lot more to test the actual real-world impact of relying on JavaScript, we do have some early results. At the end of last week I published a post outlining the uplift we saw from removing a site’s reliance on JS to display content and links on category pages.

[Chart: additional organic sessions measured in the ODN split test]

A simple test that removed the need for JavaScript on 50% of pages showed a >6% uplift in organic traffic — worth thousands of extra sessions a month. While we haven’t proven that JavaScript is always bad, nor understood the exact mechanism at work here, we have opened up a new avenue for exploration, and at least shown that it’s not a settled matter. To my mind, it highlights the importance of testing. It’s obviously our belief in the importance of SEO split-testing that led to us investing so much in the development of the ODN platform over the last 18 months or so.

Conclusion: How JavaScript indexing might work from a systems perspective

Based on all of the information we can piece together from the external behavior of the search results, public comments from Googlers, tests and experiments, and first principles, here’s how I think JavaScript indexing is working at Google at the moment: I think there is a separate queue for JS-enabled rendering, because running JavaScript over the entire web would be an unnecessary computational cost given how many pages simply don’t need it. In detail, I think:

  • Googlebot crawls and caches HTML and core resources regularly
  • Heuristics (and probably machine learning) are used to prioritize JavaScript rendering for each page:
    • Some pages are indexed with no JS execution. There are many pages that can probably be easily identified as not needing rendering, and others which are such a low priority that it isn’t worth the computing resources.
    • Some pages get immediate rendering – or possibly immediate basic/regular indexing, along with high-priority rendering. This would enable the immediate indexation of pages in news results or other QDF (query deserves freshness) results, but also allow pages that rely heavily on JS to get updated indexation when the rendering completes.
    • Many pages are rendered asynchronously in a separate process/queue from both crawling and regular indexing; when rendering completes, any new words and phrases found only in the JS-rendered version are added to the index alongside those found in the unrendered version indexed initially.
  • In addition to adding pages to the index, the JS rendering:
    • May make modifications to the link graph
    • May add new URLs to the discovery/crawling queue for Googlebot

The idea of JavaScript rendering as a distinct and separate part of the indexing pipeline is backed up by this quote from KMag, who I mentioned previously for his contributions to this HN thread (direct link) [emphasis mine]:

“I was working on the lightweight high-performance JavaScript interpretation system that sandboxed pretty much just a JS engine and a DOM implementation that we could run on every web page on the index. Most of my work was trying to improve the fidelity of the system. My code analyzed every web page in the index.

Towards the end of my time there, there was someone in Mountain View working on a heavier, higher-fidelity system that sandboxed much more of a browser, and they were trying to improve performance so they could use it on a higher percentage of the index.”

This was the situation in 2010. It seems likely that they have moved a long way towards the headless browser in all cases, but I’m skeptical about whether it would be worth their while to render every page they crawl with JavaScript given the expense of doing so and the fact that a large percentage of pages do not change substantially when you do.

My best guess is that they're using a combination of trying to figure out the need for JavaScript execution on a given page, coupled with trust/authority metrics to decide whether (and with what priority) to render a page with JS.
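
To be clear, the following is a toy model rather than Google's actual architecture; it just restates that separate-rendering-queue idea in code, with every name and threshold invented.

  interface CrawledPage {
    url: string;
    rawHtml: string;
    authorityScore: number;   // stand-in for whatever trust/importance signals exist
    looksJsReliant: boolean;  // e.g. the raw HTML is mostly an empty application shell
  }

  declare function indexWordsFrom(html: string): void; // placeholder for the real indexer

  const renderQueue: CrawledPage[] = [];

  function processCrawledPage(page: CrawledPage): void {
    // Basic indexing happens immediately from the unrendered HTML.
    indexWordsFrom(page.rawHtml);

    // Rendering is queued separately, and only where it seems likely to change what gets
    // indexed; words, links, and URLs found during rendering are folded in later.
    if (page.looksJsReliant || page.authorityScore > 0.8) {
      renderQueue.push(page);
    }
  }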

Run a test, get publicity

I have a hypothesis that I would love to see someone test: That it’s possible to get a page indexed and ranking for a nonsense word contained in the served HTML, but not initially ranking for a different nonsense word added via JavaScript; then, to see the JS get indexed some period of time later and rank for both nonsense words. If you want to run that test, let me know the results — I’d be happy to publicize them.
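
If you do run it, the setup is tiny. Here is a sketch with two made-up nonsense words, assuming the static HTML already contains the first word in a paragraph plus an empty div with id="late":

  // "flarblegrum" ships in the served HTML, so it should be indexable immediately.
  // "zintaquor" is only added by this script, so the hypothesis is that it starts
  // ranking later, once JS rendering catches up with the page.
  const late = document.getElementById('late');
  if (late) {
    late.textContent = 'zintaquor';
  }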



Friday, May 26, 2017

China and India Will Continue to Increase Oil and Coal Consumption, Paris Agreement Notwithstanding

According to the New York Times, China and India “have greatly accelerated their investments in cost-effective renewable energy sources—and reduced their reliance on fossil fuels,” but “it’s America—Donald Trump’s America—that now looks like the laggard.”[i]

Is this really what’s happening?

According to the BP Statistical Review of World Energy, the United States had the greatest share of wind and solar electricity (5.4 percent) among the three countries in 2015—the year of the most recent data available. China’s share of wind and solar power in total electricity generation was 3.9 percent and India’s was 3.7 percent. Both China and India are building coal-fired power plants (the United States is not) and both countries are increasing their demand for petroleum. According to the Energy Information Administration, they are even importing oil and petroleum products from the United States.

China

According to Bloomberg, China’s coal-fired generation capacity may increase by as much as 19 percent over the next five years. While the country has canceled some coal-fired capacity due to lack of demand growth, China still plans to increase its coal-fired capacity to almost 1,100 gigawatts, which is three times the coal-fired capacity of the United States. In the chart below, China’s non-fossil fuels include nuclear, hydroelectric, and renewable energy. China has some of the largest hydroelectric facilities in the world.

Source: https://www.oilandgas360.com/wp-content/uploads/2017/05/05162017-China-Energy-Sources.png?x56664

Further, China is also building coal-fired plants in other countries such as Kenya and Pakistan. In Lamu, Kenya, a $2 billion, 1,050-megawatt coal-fired power plant—the first of its kind in East Africa—will be financed with Chinese, South African, and Kenyan capital and built by the state-owned Power Construction Corporation of China. The plant will power an adjacent 32-berth deep-water port that is part of a plan to transform Kenya into an industrializing, middle-income country by 2030.[ii]

Excluding projects in South Africa, over 100 coal-generating units are in various stages of planning or development in 11 African countries, and China is financing about half of them. The combined capacity of the units is 42.5 gigawatts—over eight times the region’s existing coal capacity. While not all are being financed by China, almost all are financed by foreign investment.

Pakistan is committed to building as many as 12 new coal-fired power plants over the next 15 years as part of a large infrastructure investment project that China and its partners are funding. About $33 billion will be spent on 19 energy projects that include coal-fired power plants, transmission lines, and other infrastructure as part of the China-Pakistan Economic Corridor. The majority of the new generating capacity (roughly 75 percent) will come from the new coal plants.[iii]

While it is true that China is building wind and solar units domestically, much of their output is being curtailed due to a lack of infrastructure and a preference for coal. Many of China’s wind turbines have been erected in the northwest part of the country, which is sparsely populated and far from China’s big cities. The construction of transmission lines to move the wind power has kept up with neither the demand nor the construction of the wind units.[iv] According to Greenpeace, an average of 19 percent of Chinese wind power was curtailed in the first three quarters of 2016. And, in the Gansu province, 46 percent was curtailed. The Gansu and Xinjiang provinces also saw solar curtailment rates of 39 percent and 52 percent respectively during the first quarter of 2016.[v]

China has established a goal that 40 percent of the vehicles bought within the country will be electric cars or plug-in hybrids by 2030, but its appetite for gas-guzzling SUVs makes achieving that goal unlikely. China’s preference for gasoline-fueled SUVs over electric vehicles is due to their perceived safety and the lack of charging stations for electric vehicles. It is estimated that China will have 150 million SUVs by 2025 (45 percent of its passenger vehicle fleet), up from just four million SUVs in 2010. The surging SUV demand will increase oil consumption in China for at least the next decade, according to estimates from state-owned China National Petroleum Corp., and will more than offset the impact of increasing numbers of electric vehicles and hybrids.[vi] China’s transportation sector required 2.5 million barrels of gasoline per day last year, and that demand is expected to keep rising until it hits 3.6 million barrels per day in 2024.

Source: https://www.wsj.com/articles/gas-guzzlers-rule-in-china-1495418821?mg=id-wsj&mg=id-wsj

The popularity of SUVs helps to account for China’s recent growth in oil imports. In March, China imported 9.2 million barrels of crude a day—a record. Oil imports for the first quarter were 15 percent higher than a year earlier—a trend that is expected to continue. In February 2017, China was the largest buyer of U.S. crude oil, purchasing 9,575 barrels—almost 5 times more than it purchased from the United States in January.

Source: https://www.forbes.com/sites/judeclemente/2017/05/21/the-great-u-s-oil-export-boom/2/#23f2faee5502

As part of President Trump’s agreement with China, Chinese imports of liquefied natural gas (LNG) from the United States are expected to increase.[vii] While the volume has been small (the United States supplied only one percent of China’s imported LNG in 2016), the U.S. share is expected to grow as U.S. LNG-export capacity increases. It is estimated that in March 2017 the United States supplied 7 percent of all Chinese LNG imports.

In 2016, China imported 26.1 million tons of LNG, 32.6 percent more than in 2015. Australia and Qatar supplied the most—65 percent. Bloomberg expects imported natural gas to account for 40 percent of China’s total natural gas consumption by 2020, up from one-third today. Wood Mackenzie expects Chinese LNG demand to triple by 2030, reaching 75 million tons per year.


Source: https://www.oilandgas360.com/deal-paves-way-mega-sized-u-s-energy-exports/

India

As the world’s third-largest emitter of greenhouse gases, India pledged as part of the Paris Agreement to reduce its carbon emission intensity (carbon emissions per unit of GDP) by 33 to 35 percent from 2005 levels by 2030.[viii] Also, by 2030, India intends to generate 40 percent of its electricity from non-fossil fuels, including nuclear power and renewable energy. Despite that intent, India relies primarily on coal today and will continue to do so. India has plans to build nearly 370 coal-fired power plants.[ix] Between 2006 and 2016, 139 gigawatts of coal-fired capacity was brought online.[x] The planned construction of an additional 178 gigawatts would make it nearly impossible for India to meet its climate promises. By developing all of the planned coal-fired capacity, India would increase its coal-generating capacity by 123 percent.[xi]

See IER’s blog on India at http://instituteforenergyresearch.org/analysis/despite-paris-accord-india-pakistan-will-continue-use-coal/ .[xii]

Conclusion

China, the United States, and India are the three largest emitters of carbon dioxide in the world, and only the United States has decreased its emissions from 2005 levels, as the graph below shows. China and India do not intend to reduce their carbon dioxide emissions—only their carbon intensity—and investing in wind and solar energy will bring only negligible changes, as wind supplied just 1.4 percent of the world’s energy in 2015 and solar provided only 0.4 percent.

Source: BP Statistical Review of World Energy, http://www.bp.com/en/global/corporate/energy-economics/statistical-review-of-world-energy/downloads.html


[i] The New York Times, China and India Make Big Strides on Climate Change, May 22, 2017, https://www.nytimes.com/2017/05/22/opinion/paris-agreement-climate-china-india.html?_r=1

[ii] National Geographic, As the World Cuts Back on Coal, a Growing Appetite in Africa, May 10, 2017, http://news.nationalgeographic.com/2017/05/lamu-island-coal-plant-kenya-africa-climate/

[iii] MIT Technology Review, Why India and Pakistan Are Renewing Their Love Affair with Coal, May 3, 2017, https://www.technologyreview.com/s/604323/india-and-pakistans-continued-love-affair-with-coal/

[iv] National Geographic, Three Reasons to Believe in China’s Renewable Energy Boom, May 12, 2017, http://news.nationalgeographic.com/2017/05/china-renewables-energy-climate-change-pollution-environment/

[v] Greenpeace, China releases its energy sector development 13th five year plan: Greenpeace response, January 5, 2017, http://m.greenpeace.org/eastasia/mid/press/releases/climate-energy/2017/China-releases-its-energy-sector-development-13th-five-year-plan-Greenpeace-response/

[vi] Wall Street Journal, Gas Guzzlers Rule in China, May 21, 2017, https://www.wsj.com/articles/gas-guzzlers-rule-in-china-1495418821?mg=id-wsj&mg=id-wsj

[vii] Washington Examiner, Trump strikes deal to export more natural gas to China, May 12, 2017, http://www.washingtonexaminer.com/trump-strikes-deal-to-export-more-natural-gas-to-china/article/2622933

[viii] The Better India, India Ratifies the Paris Climate Agreement. This is What It Means!, October 4, 2016, http://www.thebetterindia.com/70499/paris-agreement-india-united-nations-convention/

[ix] Phys Org, India’s coal plant plans conflict with climate commitments, April 25, 2017, https://phys.org/news/2017-04-india-coal-conflict-climate-commitments.html#jCp

[x] http://endcoal.org/wp-content/uploads/2017/03/Jan-2017-New-in-India-by-year.pdf

[xi] Phys Org, India’s coal plant plans conflict with climate commitments, April 25, 2017, https://phys.org/news/2017-04-india-coal-conflict-climate-commitments.html#jCp

[xii] Institute for Energy Research, Despite the Paris Accord, India and Pakistan Will Continue to Use Coal, May 18, 2017, http://instituteforenergyresearch.org/analysis/despite-paris-accord-india-pakistan-will-continue-use-coal/
