ilya's blog


Lynyrd Skynyrd 1/19/2013

 

(Here's my review of the show for New England Music News.)

Amanda Palmer at the Paradise Rock Club 11/17/2012

 

(Here's my review of the show for New England Concert Reviews.)

Fun concert moments of 2012

 

It's been a blast shooting a bunch of shows over the past 8 months.  Here are a few of my favorites, in no particular order:

Slash in Boston

My gallery on TheyWillRockYou.com - here's a teaser:

 

Cheap Trick

Photo gallery here.

Chickenfoot

Sammy Hagar, Michael Anthony, Joe Satriani!  Gallery here.

 

Aerosmith

One of my all-time favorite bands!  Here's my review on TheyWillRockYou.com, and here's the gallery:

TAB the Band

Good rock!  Gallery here.

Dropkick Murphys with the Boston Symphony Orchestra

A fun Boston combo.  Gallery here.

Lez Zeppelin

Gallery here.  (Plant & Page would be proud!)

L.A. Guns

Gallery here.

Dropkick Murphys at Tsongas Arena

Learning to dance at Flannigan's Ball...   Gallery here.

Jackyl

Chain saws?  Yes. Gallery here.

Rob Zombie

No, not too freaky...  Gallery here.

Volbeat

Awesome Danish band.  Rockabilly and metal.  Galleries here and here.

Theory of a Deadman

Gallery here.

Slash at Rocklahoma

Gallery here.

Ted Nugent

Gallery here.

Guns N' Roses at The Ritz

Remember the 1988 MTV special?  The 2012 version was also pretty awesome.


Medium Format Fun in Israel

 

Explored several medium format films on a recent trip to Israel, including Ilford XP2 Super, Ektar 100, Ilford 100, Ilford HP5, Velvia 50, and color infrared.

Most of these were taken with the Mamiya C330S, though a few were with the Voigtlander Perkeo I folding camera.  Exposure was "eye-balled."

Experimenting with Infrared Color Film

 

I recently decided to experiment with infrared film, both color and B&W.  (Because these films respond differently than our eyes, or than traditional films and sensors, you can get some interesting dreamlike and false-color effects.)

Though there's a bunch of options for B&W infrared film, things are a bit trickier with infrared color film - the last commercially available film, Kodak's Aerochrome, was discontinued in 2010.  So I got a few medium format (120) rolls from photographer Dean Bennici (check out Dean's infrared photos here).

I took the film on a recent trip to Israel, shooting it with a Mamiya C330S medium format camera.  Some notes:

  1. For the most part, I shot it with either a yellow or red filter, at ISO 400 (overexposing by a stop when using the red filter).
  2. It's slide film, so I processed it using the E6 process.
  3. Then, high-res scans.
  4. I had a bunch of questions, all of which Dean answered (thanks again, Dean!) - everything from helpful info on loading technique, exposure, and processing to the pros/cons of various filters.

Here are some of the results:

Observations:

  1. I wasn't expecting every shot to work (and indeed, not every shot did!), but some came out with a pretty unique result - for example, the red-looking cactus, or the surreal landscape and (Dead Sea) seascape.
  2. Some IR film is really sensitive to handling and exposure.  This film seemed fine - I loaded it in daylight, didn't think too hard about the exact exposure (guesstimating the right exposure using the Sunny 16 rule), and more often than not, was happy with the results.
  3. For a couple of the shots where I wasn't sure of the exposure (and it was "worth it" to spend another frame), I bracketed, exposing it an extra stop.
  4. In case you're looking to experiment - this film isn't going to be around forever.
  5. That said, there's all sorts of cool options emerging with infrared digital photography, so this technique and art form isn't going away.
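The Sunny 16 guesstimate mentioned above is easy to sketch in code (the function is my own illustration, not a replacement for a meter):

```python
def sunny16_shutter(iso, aperture=16.0, stops_darker=0.0):
    """Approximate shutter time (seconds) from the Sunny 16 rule:
    bright sun at f/16 needs roughly 1/ISO seconds; a wider aperture
    shortens the time by the square of the f-number ratio, and each
    stop of darker light (or deliberate overexposure) doubles it."""
    return (1.0 / iso) * (aperture / 16.0) ** 2 * 2.0 ** stops_darker

print(sunny16_shutter(400))                    # full sun at f/16: 1/400 s
print(sunny16_shutter(400, stops_darker=1.0))  # +1 stop, e.g. with the red filter
```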

A Year in the Life of a Startup: A Marketer’s Checklist

 

I recently passed the one-year mark at my current startup.  As I looked back at the things we've been able to do over the past year, and compared that with the first year's goals and efforts at my other two startups, and those of my friends running marketing elsewhere, it occurred to me that there's a great deal of overlap (probably because there's a lot of commonality among our goals, efforts, and achievements).

So, I figured I'd make a checklist of sorts – the things one has hopefully accomplished by the end of the startup's first year in the marketplace. To some extent it's a follow-up to my earlier eBook, "Building the Marketing Plan: A Blueprint for Startups."  Though this time around, it's less about putting together a plan and more about proposing specific criteria for evaluating progress during, and at the end of, the first year.

This is by no means a definitive guide or a comprehensive list, but rather a proposal for what might be reasonable for a startup to accomplish, and questions you probably want to answer, within the first year.  (And of course, depending on your specific situation, some of these may be much harder/easier, and more/less important than others.)

So, here's the full eBook, as well as a summary checklist below:

(The eBook covers each of the items below.)

Marketing

Refining the Web site

  • Identify key personas
  • Develop content for each persona
  • Develop content for each stage of sales process
  • Rich set of calls-to-action
  • Know which pages are popular, which are not, and why
  • Frequently measure, tweak, and experiment with content on your site (not only the home page – but also the calls-to-action, organization of the pages, layout)

Differentiated positioning

  • Clear, compelling to all prospect personas
  • Differentiated in the market
  • Sustainable
  • Pressure-tested in sales, in PR    

Content

  • An active blog
  • Webinars, videos, presentations, eBooks,
  • Lots of content addressing your prospects’ problems
  • Know which content is popular
  • A rich pipeline of content in development
  • Content for each stage of sales process

SEO

  • Know which search terms prospects use
  • Prioritize based on popularity and difficulty
  • Build content targeting these terms

Case studies and References

  • Rich case study library for key markets, use cases, hard- and soft-metrics
  • Leverage in PR, web site, videos, webinars, live talks, reference calls   

Events

  • Identify key events
  • Exhibit and present at events
  • Measure ROI

Which social media sites matter

  • Identify key sites
  • Measure the importance of LinkedIn, Facebook, Twitter, etc.
  • Understand conversion rates (traffic==>leads==>customers)

Connection with bloggers

  • Know the key bloggers
  • Engage with them as part of a community outreach program

Lead generation

  • Pilots to measure lead gen options
  • Understand cost/lead and cost/sale metrics
  • Know (and deliver) lead flow rate required by sales

Marketing metrics

  • Conversion rates for traffic, leads, customers
  • Understand how competitors compare with respect to traffic, inbound links, offers

Lead nurturing, lead intelligence

  • Understand buying process from customer’s perspective
  • Build content/tools/process that naturally pulls prospect along the sales stages
  • Leverage automation to streamline nurturing

Awards

  • Identify awards that matter (for companies, products, individuals, campaigns, etc.)
  • Develop your award program
  • Pitch, learn, refine

Sales

Mapping the Sales process

  • Know the buying process
  • Deliver content, tools, workflow, automation to support sales process

Prospecting tools and selection criteria

  • What are the key variables to use? (prospect title, company demographics, etc.)
  • Design an experiment to zero in on selection criteria

Sales rep training

  • Develop onboarding process for new sales hires 
  • Certification 

Sales metrics

  • Know sales cycle, conversion rates at each stage
  • Know average sales price (ASP) and cost of customer acquisition (CoCA)
  • Know key productivity metrics per sales rep

Leveraging lead intelligence in Sales

  • Know prospects’ behavior on your site
  • Append demographic data from external sources
  • Automate lead grading and nurturing using available data

Cross-functional

Iterate & refine

  • Set Goals
  • Web site content, layout, navigation
  • Calls-to-action
  • Sales/marketing infrastructure

Know where to step on the gas

  • Lead generation programs
  • Cold-call list selection criteria
  • What roles to hire next

Infrastructure refinement

  • Marketing automation
  • Lead nurturing
  • Competitive intelligence
                                                                                              

Thoughts on other stuff to add to the "must achieve" checklist?

How a 9-Year Old Successfully Newsjacked the GOP Primary

 

Here is my guest post on the HubSpot blog about how journalist Darren Garnick and his 9-year-old son Ari did some very cool "newsjacking" recently. (According to marketing strategist David Meerman Scott, newsjacking is "the process by which you inject your ideas or angles into breaking news, in real time, in order to generate media coverage for yourself or your business.")

The post outlines several marketing lessons, both in how the candidates answered and in the propagation of the story itself.

And when you're ready to newsjack, there are some great tips from David Meerman Scott.


Refining Cold-Call Lists for High Velocity Sales Operations

 

Inside sales teams live and die by the quality of the leads they call on.  Though hopefully you have an ever-growing volume of marketing-generated leads, chances are that if you're in a growing business, it'll feel like a never-ending treadmill, and you'll be adding inside sales professionals faster than marketing can generate leads.  (Yes, there are rare and notable exceptions – companies like HubSpot that have insane inbound lead volumes, or companies totally committed to the "freemium" model, where inside sales reps focus 100% on upgrading downloaders of the free products.)

The economics of “high velocity” (a.k.a. “low friction”) sales models are particularly sensitive to the quality of cold-call lists, for the following reasons:

  1. The average sales deal is relatively small
  2. Sales expense makes up the majority of the cost of customer acquisition (CoCA)
  3. With cold-calling – even targeted cold-calling – you’re typically looking for a needle in a haystack, with somewhere over 99% of the calls not contributing to a sale.

As a result, success rests on that razor-thin difference in quality: a list that’s “99% bad” might drive profitable sales, whereas “99.9% bad” might be a giant waste of time, because the former has 1% quality leads, whereas the latter has 0.1%.
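To put numbers on that razor-thin difference, here's a minimal sketch; the cost-per-call and close-rate figures are made up for illustration:

```python
def cost_per_sale(good_lead_rate, close_rate, cost_per_call):
    """Expected sales cost per closed deal: the number of calls needed
    to find and close one deal, times the cost of each call."""
    calls_per_sale = 1.0 / (good_lead_rate * close_rate)
    return calls_per_sale * cost_per_call

# Hypothetical: $10 of fully-loaded rep time per call, and 20% of
# quality leads eventually close.
print(cost_per_sale(0.01, 0.20, 10.0))   # "99% bad" list:   ~$5,000 per sale
print(cost_per_sale(0.001, 0.20, 10.0))  # "99.9% bad" list: ~$50,000 per sale
```

Same team, same pitch - a tenfold difference in the cost of customer acquisition, purely from list quality.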

In this post, I’ll outline a technique for systematically zeroing in on the correct selection criteria.

Selection Criteria: Identify the variables

Whether you're using list brokers, online sources like Hoovers or iSell, or online directories such as Jigsaw, ZoomInfo, and LinkedIn, there are many sources for finding prospects to call on.  They all have pros and cons, but regardless of the source, everything starts from the selection criteria you use.  Though the exact criteria will be specific to your business, several obvious variables to consider are identified below.  For each factor, identify the different levels you want to explore.  (After all, you might not yet know the right title, or the right company size.)  In the example below, let's assume we have a software product that helps marketers, but we're not yet sure of the right title to call on, or the right company demographics.

  1. Prospect’s job title (and if there’s several common synonyms, you might consider grouping them all using a keyword in job title, job description, and/or experience).  For example: “marketing communications,” “product manager” or “public relations.”
  2. Prospect’s level in the organization – is this an individual contributor? A manager/director?  A C-level executive?  Many databases provide this automatically, or you can fairly easily group them in Excel by looking for words like “manager,” “director,” “vice president,” and “chief.”
  3. Department: depending on whom you sell to, you might pick several groups (marketing, information technology, customer service)
  4. Company size (Revenue and/or # of Employees)
  5. Industry sector (e.g., healthcare, financial services, manufacturing, etc.)

Here, you might want to be careful about just how many levels of each variable you choose to test.  The more variables you have, and the more levels for each, the larger the sample size you’ll need for the experiment (see below).

What to Measure

Though you might want to measure a bunch of factors, three typical ones to consider are:

  1. Conversion rate (to next stage in the sales process – e.g., a demo, or a “needs analysis” call)
  2. Deal close rate
  3. Average sales price

The benefit of measuring #1 is that it’s the quickest statistic you can get; #2 and #3 require going through the full sales cycle, which not only adds delay but may also require a sufficiently large sample upfront, because each stage in the sales process is a reduction filter.  Over time you’ll of course want to measure the close rate and deal size too, as they are the ultimate arbiters of quality and profitability.
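Worked backwards, that "reduction filter" effect determines how large a list you need upfront. A minimal sketch (the funnel rates are hypothetical):

```python
def names_needed(target_closed_deals, stage_rates):
    """Walk the funnel backwards: each stage is a reduction filter,
    so divide by each stage's conversion rate in turn to find how
    many names must enter the top of the funnel."""
    n = float(target_closed_deals)
    for rate in reversed(stage_rates):
        n /= rate
    return round(n)

# Hypothetical funnel: 2% of calls convert to a demo, 20% of demos
# close. Seeing ~10 closed deals takes on the order of 2,500 names.
print(names_needed(10, [0.02, 0.20]))  # 2500
```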

Establish your baseline

If you already know what your average conversion rate or deal close rate is, great.  Unless you’re just starting out, you should have a decent sense of what those are; if not, they should be straightforward to calculate.  Just be careful of sample bias: if a particular profile dominates your sample (e.g., you’ve primarily called on marketing communications professionals at healthcare companies with revenues between $50M and $250M), then your “baseline” might not be representative of the general population.  The good news is that however biased your baseline might be to begin with, once you’ve designed the right experiment (below) and carried it out, you’ll have a reliable average.

Design your experiment

How many calls do you need to make?  How many samples do you need for each combination of variables?  This is an important question, and not one that can be answered trivially.  It depends on many factors, including:

  • How many variables are you testing?
  • How many levels for each variable?
  • What data is easily available from whatever database(s) you’re using?
  • How many sales reps do you have making calls?
  • What are the typical conversion rates?
  • How much precision and statistical significance are you looking for?
  • Is it ok to assume the factors are all independent?
  • Etc.

I’ll skip the rigorous statistics, and instead propose a couple rules of thumb:

  1. If you’re measuring something that happens infrequently (e.g., the ~1% of calls that convert to demos), then you might want at least several hundred names for each combination of variables.  After all, if you’re trying to see whether there’s a difference between variable combination A and variable combination B, you need a large enough sample to distinguish between, say, 0.8% and 1.5%.
  2. You don’t need to test every permutation.  For example: say you had 3 different titles, 5 different industries, 4 different company-size categories, and 5 different departments – that would be 300 different permutations.  If you populated each one with, say, 300 names, that would be 90,000 names in your experiment!  (First, good luck getting that list with all the variables, and second – that might keep your team busy for the next 6 months.)  The good news is that if you can make some simplifying assumptions (e.g., that the factors are largely independent), you can use something called fractional factorial design or orthogonal arrays to dramatically reduce the needed sample.

But you don’t need to be a statistics whiz to carry out this experiment.  At the end of the day, aim for at least a couple thousand names to call, and you’ll be in fine shape to get good insight into which factors matter.  After all, this is less about proving something in court, and more about quickly iterating toward good list selection criteria.
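For readers who do want a quick statistical sanity check, here's a rough sample-size sketch using the standard normal approximation for comparing two proportions (the rates are the 0.8% vs. 1.5% example above; the function name is my own):

```python
import math
from statistics import NormalDist

def sample_size_per_cell(p1, p2, alpha=0.05, power=0.80):
    """Rough per-cell sample size for a two-sided two-proportion
    z-test (normal approximation): large enough to tell conversion
    rate p1 from rate p2 at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Telling a 0.8% list from a 1.5% list with conventional significance
# takes thousands of names per cell, not hundreds:
print(sample_size_per_cell(0.008, 0.015))
```

which is exactly why fractional designs and rules of thumb are so useful in practice.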

Execute the experiment

The neat thing about high velocity models is that you can run these experiments fairly quickly.  An inside sales rep should be able to make 100 calls per day.  If you have a team of five reps, that’s 2,500 calls your team will make in a week.  So within a week, a picture should start to come into view.

Analyze the data

As you carry out the experiment, compute the results along the way.  Do a sanity test and ask yourself a few questions such as:

  • Do the results make sense? Are they statistically significant?
  • Is there much variation across sales reps?
  • Based on the results so far, what might be the best selection criteria?

Along the way, you might decide to do more experimentation, further refinement, or to beef up the sample size.  Here’s a typical example from a cold-calling experiment, with some notes regarding each variable tested:
 

  • Prospect title: it looks like the “marketing communications” title performed best, with calls converting to demos at almost twice the baseline rate (2.2% vs. 1.3%).
  • Department: looks like calling on prospects from Marketing was more productive than calling on IT or Customer Service teams.
  • Level in organization: in this example, we are looking at conversion across the entire sales funnel, from initial call to sale.  Note the wide variation: as we move from individual contributors to managers/directors, the conversion rate rises by almost 3.5X (from 0.20% to 0.68%), then falls off sharply as we call higher up the organizational structure.  This might be because you need to reach someone with budget authority, but as you call higher still, you reach people with much broader responsibility who aren’t looking closely at the kinds of challenges your software might be solving.  Of course, there are other possible reasons for a sharp drop-off higher up, so you might want to investigate further.  (For example, your connect rates might be a lot lower once you reach a management level that relies on administrative assistants and voicemail to screen calls.  Or your sales pitch might not be tuned to the needs of the senior exec.)
  • Company size: So given the results in this example, what companies should you target?  My sense is, it depends – on several factors that we’d probably want to look closer at.  On the one hand, it looks like as the company size increases, our average deal size increases.  On the other hand, we have not looked at other factors, including: How many calls (and related sales effort) did we have to make for each sale?  How many companies are there in each category?  Is our entire sales team capable of selling to the larger firms, or are just the best few reps able to make these sorts of sales?  Also, we can see that the ASP goes up for the largest firms, but not by much – so we’d probably want to pressure-test whether this is truly representative of the opportunity within larger firms, or just a reflection of our sales capability to sell to major accounts.  Depending on the answers, we might, for example, conclude that our sweet spot might be in the $50M-$250M company size – a large market segment, with a good ASP.

I should point out that there's a bunch of simplifying assumptions here.  For example, that the variables are all independent – in reality, it may turn out that some combinations are much rarer than others, and you can’t source them as easily.  Or that other factors that we might not be testing for – e.g., list accuracy – also matter.  Or that rather than there being one ideal target, there in fact may be several demographic profiles, or combination of variables, that represent great targets (cluster analysis might be a way to identify these – perhaps a topic for a future post).  But hopefully, over time, you can identify the variables and values that yield the best results, and identify more variables to test.  After all, the inside sales model is a fantastic laboratory for rapidly testing, refining, and improving not just your sales operation, but the entire customer life cycle - how you market, sell, and support customers.

Finally: I’d be glad to hear from you – which factors have you found particularly valuable when building cold-call lists?


Building a Geographic Opportunity Model in 5 Minutes

 

Recently, I needed to quickly build a geographic opportunity model.  I've written in the past about some ideas for market sizing, and a couple of those ideas came in really handy - so I figured I'd outline the approach and provide a hands-on example.

In this particular case, we needed to load-balance a sales team across geographies, and wanted to get a sense of how to split up the territories, and get some insights into how to allocate sales efforts within those territories.   But the same sort of analysis can be applied to many other common questions:

  • How does our penetration in territory X compare to territory Y?
  • How does our success in industry X compare to industry Y?
  • How many prospects should we expect in territory X?
  • How many lead generation events should we do in California, versus New York, vs Connecticut?
  • etc.

Now, in the ideal world, you would have access to a database of the exact thing you need - for example, "companies in industry X, having revenues between $200M and $1B, with data centers of at least 50 servers."  But in the real world, you might not have exactly that.  In fact, you might not yet know the exact target demographic you even want.  So what to do?

Well, the good news is that it (almost) doesn't matter which exact metric you use! (This is related to the Law of Large Numbers and the Central Limit Theorem, though I won't get into that unless there's a comment asking for it!)  As I outline in more detail here, there's a multitude of free sources online - by industry, by state, by postal code, by products produced, by company size, etc.  (For example, here's info on gas stations across the U.S.)

To illustrate this point, let's take a specific example.  Let's say you're an enterprise software vendor, targeting a particular role within organizations above $100M in size.  And let's say you want to carve up the U.S. into 5 contiguous territories, with roughly the same opportunity - how might you do this, and how much confidence should you have in the model?

So consider 3 metrics:

  • # of Companies above $100M in revenue
  • # of Coffee shops
  • # of Pet Grooming salons

What?  I'm proposing sizing an enterprise software vendor's market opportunity by looking at the # of pet grooming salons?  YES!  The cool thing is that there's often great correlation among seemingly independent quantities.  So if you don't have one metric and pick a sufficiently close one, you'll be in good shape -- because even seemingly UNRELATED metrics are often highly correlated.  Let's take a look.  Consider the following - REAL - data for these 3 metrics, by state:

Turns out statistically, they are indeed highly correlated:
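If you want to check correlations like these yourself, Pearson's r is only a few lines of code (the per-state counts below are made up for illustration, not the real data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up per-state counts: big states simply have more of everything,
# which is exactly why seemingly unrelated proxies track each other.
companies = [1200, 300, 900, 150, 600]    # firms above $100M revenue
coffee    = [2400, 650, 1700, 350, 1150]  # coffee shops
groomers  = [800, 210, 580, 110, 400]     # pet grooming salons

print(round(pearson(companies, coffee), 3))
print(round(pearson(companies, groomers), 3))
```

On data shaped like this, both correlations come out near 1.0.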


Another Example: Correlating Visitors, Leads, and GDP

As another real-life example, consider the following 3 metrics for a start-up company, with the key question being, "what is the relative market opportunity for each state?":

  1. # of Website visitors from each state
  2. # of leads from each state, generated via marketing activities
  3. the GDP (gross domestic product) of each state

The interesting thing is that even though this was a start-up with just a few months in the market, all 3 metrics are quite highly correlated.  If they weren't correlated, we might wonder whether one of the metrics over- or under-represents the opportunity.  But they are correlated - so using any one of them, or better yet averaging several, would likely give a reasonably accurate measure of each state's opportunity.

And you don't even need data across all 50 states (or all industrialized countries).  You can use this technique with just a couple of data points.  For example, say you want to roughly size the opportunity in Industry X (a market you have not yet entered) to see how it compares to Industry Y, where you already have a presence.  Find some metric that roughly correlates with opportunity (e.g., annual revenues, or # of firms) for the two markets, and their ratio will give you a sense of how you might do in the new segment.
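That ratio trick is one line of arithmetic; here's a sketch with hypothetical numbers:

```python
def estimate_new_segment(known_result, known_metric, new_metric):
    """Scale a known segment's result by the ratio of a proxy metric."""
    return known_result * (new_metric / known_metric)

# Hypothetical: we do $2M/year in Industry Y, which has ~8,000 firms;
# Industry X has ~12,000 firms of a similar profile.
print(estimate_new_segment(2_000_000, 8_000, 12_000))  # 3000000.0
```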

So next time someone asks you to size a territory's opportunity, feel free to ask, "Sure - how many dog grooming shops does it have?"  :)

Guns N' Roses in Houston 11/4/2011

 

Just came back from an epic concert.  Last time I saw GNR was in 1988, opening for Aerosmith (my two favorite bands).  Needless to say, I was psyched to see GNR again.

Axl and company did not disappoint: his vocal range is still there, and the world-class musicians Axl assembled played the songs faithfully while adding their own flavor.  The 3-hr concert consisted of a rich 31-song set (summarized below), including interesting solos from everyone.

Though I didn't have my real concert gear, I did bring a point-and-shoot, and here's a few of the photos:

Set list:

  1. Chinese Democracy
  2. Welcome To The Jungle
  3. It's So Easy
  4. Mr. Brownstone
  5. Sorry
  6. Better
  7. Estranged
  8. Rocket Queen
  9. Richard Fortus Guitar Solo (James Bond Theme)
  10. Live and Let Die (Paul McCartney & Wings cover)
  11. This I Love
  12. Riff Raff (AC/DC cover)
  13. My Generation (The Who cover)
  14. Dizzy Reed Piano Solo (Baba O' Riley)
  15. Street Of Dreams
  16. You Could Be Mine
  17. DJ Ashba Guitar Solo (Mi Amor)
  18. Sweet Child O' Mine
  19. Instrumental Jam (Another Brick in The Wall Pt. 2)
  20. Axl Rose Piano Solo (Someone Saved My Life Tonight/Goodbye Yellow Brick Road)
  21. November Rain
  22. Bumblefoot Guitar Solo (Pink Panther Theme)
  23. Don't Cry
  24. Whole Lotta Rosie (AC/DC cover)
  25. Knockin' On Heaven's Door (Bob Dylan cover)
  26. Nightrain
  27. Madagascar
  28. Out Ta Get Me
  29. Patience
  30. Shackler's Revenge
  31. Paradise City


