Because I have enormous respect for Ian David Moss of Fractured Atlas and the Createquity blog, I am re-posting his entire recent post, featuring guest author Ann Markusen. If you have any interest in the future of creative placemaking, this is a must-read. Ms. Markusen is a leading researcher and thought leader in this field. Below is Ian's latest Createquity posting.
(If you don’t know the name Ann Markusen, you should. As professor and director of the Project on Regional and Industrial Economics
at the University of Minnesota Humphrey School of Public Affairs, Ann
has become one of the most respected and senior voices in the arts
research community over the past decade. Among her best-known recent
efforts was her authorship, with Anne Gadwa Nicodemus, of the original Creative Placemaking white paper
published by the NEA prior to the creation of the Our Town grant
program and ArtPlace funder collaborative. So when she approached me to
offer a guest post on evaluation challenges for creative placemaking,
building on previous coverage of the topic
here at Createquity, I could hardly say no. I hope you enjoy Ann’s
piece and I look forward to the vigorous discussion it will no doubt
spark. -IDM)
Creative placemaking is electrifying communities large and small
around the country. Mayors, public agencies and arts organizations are
finding each other and committing to new initiatives. That’s a wonderful
thing, whether or not their proposals are funded by national
initiatives such as the National Endowment for the Arts’s Our Town program or ArtPlace.
It’s important to learn from and improve our practices on this new
and so promising terrain. But efforts based on fuzzy concepts and
indicators designed to rely on data external to the funded projects are
bound to disappoint. Our evaluative systems must nurture rather than
discourage the marvelous movement of arts organizations, artists and arts
funders out of their bunkers and into our neighborhoods as leaders,
animators, and above all, exhibitors of the value of arts and culture.
In our 2010 Creative Placemaking white paper for the NEA,
Anne Gadwa Nicodemus and I characterize creative placemaking as a
process where “partners… shape the physical and social character of a
neighborhood, town, city, or region around arts and cultural
activities.” A prominent ambition, we wrote, is to “bring diverse people
together to celebrate, inspire, and be inspired.” Creative placemaking
also “animates public and private spaces, rejuvenates structures and
streetscapes, (and) improves local business viability and public
safety,” but arts and culture are at its core. This definition
suggests a number of distinctive arenas of experimentation, where the
gifts of the arts are devoted to community liveliness and collaborative
problem-solving and where new people participate in the arts and share
their cultures.
And, indeed, Our Town and ArtPlace encourage precisely this experimental ferment. Like the case studies in Creative Placemaking,
each funded project is unique in its artistic disciplines, scale,
problems addressed and aspirations for its particular place. Thus, a
good evaluation system will monitor the progress of each project team
towards its stated goals, including revisions made along the way. NEA’s
Our Town asks grant-seekers to describe how they intend to evaluate
their work, and ArtPlace requires a monthly blog entry. But rather than
more formally evaluate each project’s progress over time, both funders
have developed and are compiling place-specific measures based on
external data sources that they will use to gauge success: the Arts and Livability Indicators in the case of the NEA, and what ArtPlace is calling its Vibrancy Indicators.
Creative placemaking funders are optimistic about these efforts and their usefulness. “Over the next year or two,” wrote Jason Schupbach,
NEA’s Director of Design, last May, “we will build out this system and
publish it through a website so that anyone who wants to track a
project’s progress in these areas (improved local community of artists
and arts organizations, increased community attachment, improved quality
of life, invigorated local economies) will be able to do so, whether it
is NEA-funded or not. They can simply enter the time and geography
parameters relevant to their project and see for themselves.”
Over the past two years, I have been consulting with creative
placemaking leaders and giving talks to audiences in many cities and
towns across the country and abroad. Increasingly, I am hearing distress
on the part of creative placemaking practitioners about the indicator
initiatives of the National Endowment for the Arts and ArtPlace. At the
annual meetings of the National Alliance for Media Arts and Culture last
month, my fellow Creative Placemaking panel members, all involved in
one or more ArtPlace- or Our Town-funded projects, expressed
considerable anxiety and confusion about these indicators and how they
are being constructed. In particular, many current grantee teams with
whom I’ve spoken are baffled by the one-measure-fits-all nature of the
indicators, especially in the absence of formal and case-tailored
evaluation.
I’ll confess I’m an evidence gal. I fervently believe in numbers
where they are a good measure of outcomes; in secondary data like Census
and the National Center for Charitable Statistics where they are up to
the task; in surveys where no such data exist; in case studies to
illuminate the context, process, and the impacts people tangibly
experience; in interviews to find out how actors make decisions and view
their own performance. My own work over the past decade is replete with examples of these practices, including appendices intended to make the methodology and data used as transparent as possible.
So I embrace the project of evaluation, but am skeptical of relying
on indicators for this purpose. In pursuing a more effective course, we
can learn a lot from private sector venture capital practices, the ways
that foundations conduct grantee evaluations, and, for political
pitfalls, defense conversion placemaking experiments of the 1990s.
Learning from Venture Capital and Philanthropy
How do private sector venture capital (VC) firms evaluate the
enterprises they invest in? Although they target rates of return in the
longer run, they do not resort to indicators based on secondary data to
evaluate progress. They closely monitor their investees—small firms who
often have little business experience, just as many creative placemaking
teams are new to their terrain. VC firms play an active role in guiding
youthful companies, giving them feedback germane to their product or
service goals. They help managers evaluate their progress and bring in
special expertise where needed.
Venture capital firms are patient, understanding realistic timelines.
The rule of thumb is that they commit to five to seven years, though it
may be less or more. Among our Creative Placemaking cases, few efforts succeeded in five years, while some took ten to fifteen years.
VC firms know that some efforts will fail. They are attentive to
learning from such failures and sharing what they learn in generic form
with the larger business community. Both ArtPlace and the NEA have
stated their desire to learn from success and failure. Yet generic
indicators, their chosen evaluation tools, are neither patient nor
tailored to specific project ambitions. Current Our Town and ArtPlace
grant recipients worry that the 1-2 years of funding they’re getting
won’t be enough to carry projects through to success or establish enough
local momentum to be self-sustaining. Neither ArtPlace nor Our Town
has a realistic exit strategy in place for its investments, other
than “the grant period’s over, good luck!”
Hands-on guidance is not foreign to nonprofit philanthropies funding
the arts. Many arts program officers act as informal consultants and
mentors to young struggling arts organizations and to mature ones facing
new challenges. My study with Amanda Johnson of Artists’ Centers shows
how Minnesota funders have played such roles for decades. They ask
established arts executive directors to mentor new start-ups, a process
that the latter praised highly as crucial to their success. The Irvine
and Hewlett Foundations are currently funding California nonprofit
intermediaries to help small, folk and ethnic organizations use grant monies wisely.
They also pay for intermediaries across sectors (arts and culture,
health, community development and so on) to meet together to learn what
works best.
The NEA has hosted three webinars at which Our Town panelists talk
about what they see as effective projects/proposals, a step in this
direction. But these discussions are far from a systematic gathering and
collating of experience from all grantees in ways that would help the
cohorts learn and contact those with similar challenges.
The Indicator Impetus
Why are the major funders of creative placemaking staking so much on
indicators rather than evaluating projects on their own aspirations and
steps forward? Pressure from the Office of Management and Budget, the
federal bean-counters, is one factor. In January of 2011, President
Obama signed into law the GPRA Modernization Act, updating the original
1993 Government Performance and Results Act (GPRA), and a new August 2012 Circular A-11 heavily emphasizes use of performance indicators for all agencies and their programs.
As a veteran of research and policy work on scientific and
engineering occupations and on industrial sectors like steel and the
military industrial complex, I fear that others will perceive indicator
mania as a sign of field weakness. To Ian David Moss’s provocative title
“Creative Placemaking has an Outcomes Problem,”
I’d reply that we’re in good company. Huge agencies of the federal
government, like the National Science Foundation, the National
Institutes of Health and NASA, fund experiments and exploratory
development without asking that results be held up to some set of
external indicators not closely related to their missions. They accept
slow progress and even failure, as in cancer research or nuclear fusion,
because the end goal is worthy and because we learn from failure.
Evaluation by external generic indicators fails to acknowledge the
experimental and ground-breaking nature of these creative-placemaking
initiatives and misses an opportunity to bolster understanding of how
arts and cultural missions create public value.
Why Indicators Will Disappoint I: Definitional Challenges
Many of the indicators charted in ArtPlace, NEA Our Town, and other exercises (e.g. WESTAF’s Creative Vitality Index)
bear a tenuous relationship to the complex fabric of communities or
specific creative placemaking initiatives. Terms like “vitality,”
“vibrancy,” and “livability” are great examples of fuzzy concepts, a
notion that I used a decade ago
to critique planners’ and geographers’ infatuation with concepts like
“world cities” and “flexible specialization.” A fuzzy concept is one
that means different things to different people, but flourishes
precisely because of its imprecision. It leaves one open to trenchant
critiques, as in Thomas Frank’s recent pillorying of the notion of vibrancy.
Take livability, for instance, prominent in the NEA’s indicators project.
One person’s quality of life can be inimical to another’s. Consider the young
live music scene in cities: youth magnet, older resident nightmare.
Probably no concept as worthy as quality of life has been the subject
of so many disappointing and conflicting measurement exercises.
Just what does vibrancy mean? Let’s try to unpack the term. ArtPlace’s definition:
“we define vibrancy as places with an unusual scale and intensity of
specific kinds of human interaction.” Pretty vague, and… vibrancy are
places? Unusual scale? Scale meaning extensive, intensive? Of specific
kinds? What kinds? This definition is followed by: “While we are not
able to measure vibrancy directly, we believe that the measures we are
assembling, taken together, will provide useful insights into the nature
and location of especially vibrant places within communities.” If I
were running a college or community discussion session on this, I would
put the terms “vibrancy, places, communities, measures,” and so on up on
the board (so to speak), and we would undoubtedly have a spirited and
inconclusive debate!
And what is the purpose of measuring vibrancy? Again from the same
ArtPlace LOI (letter of inquiry): “…the purpose of our vibrancy metrics is not to pronounce
some projects ‘successes’ and other projects ‘failures’ but rather to
learn more about the characteristics of the projects and community
context in which they take place which leads to or at least seems
associated with improved places.” Even though the above description
mentions “characteristics of the projects,” it’s notable that their
published vibrancy indicators only measure features of place.
In fact, many of the ArtPlace and NEA indicators are roughly designed
and sometimes in conflict. While giving the nod to “thriving in place,”
ArtPlace emphasizes the desirability of visitors in its vibrancy
definition (meaning outsiders to the community); by contrast, the NEA
prioritizes social cohesion and community attachment, attributes scarce
in the ArtPlace definitions. For instance, ArtPlace proposes to use
employment ratio—“the number of employed residents living in a
particular geography (Census Block) and dividing that number by the
working age persons living on that same block” as a measure of
people-vibrancy. The rationale: “vibrant neighborhoods have a high
fraction of their residents of working age who are employed.” Think of
the large areas of new, non-mixed-use, upscale high-rise condos whose
mostly young professional residents commute daily to jobs and
nightly to bars and cafes outside the neighborhood. Not vibrant at all.
But such areas would rank high using this measure.
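To make the arithmetic concrete, here is a minimal sketch in Python of the employment ratio as ArtPlace describes it. The block-level counts are invented for illustration; ArtPlace has not published its computation:

    # Hypothetical illustration of ArtPlace's proposed employment ratio:
    # employed residents on a Census block divided by working-age
    # residents on that block. All counts below are invented.

    def employment_ratio(employed_residents, working_age_residents):
        return employed_residents / working_age_residents

    # An upscale high-rise condo block: residents are overwhelmingly
    # employed young professionals who commute out for work and nightlife.
    condo_block = employment_ratio(employed_residents=950,
                                   working_age_residents=1000)

    # A mixed neighborhood of students, retirees and caregivers with
    # lively street-level arts activity day and night.
    mixed_block = employment_ratio(employed_residents=520,
                                   working_age_residents=800)

    print(f"condo block: {condo_block:.2f}")  # 0.95 -- ranks highly "vibrant"
    print(f"mixed block: {mixed_block:.2f}")  # 0.65 -- ranks lower despite more street life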
ArtPlace links vibrancy with diversity, defined as heterogeneity of
people by income, race and ethnicity. They propose “the racial and
ethnic diversity index” (composition not made explicit) and “the
mixed-income, middle income index” (ditto) to capture diversity. But
what about age diversity? Shouldn’t we want intergenerational activity
and encounters too? It is also problematic to prioritize the dilution of
ethnicity in large enclaves of recent immigrant groups. Would a
thriving heavily Vietnamese city or suburb be considered non-vibrant
because its residents choose to live and build their cultural
institutions there, facing discrimination in other housing markets?
Would an ethnic neighborhood experiencing white hipster incursions be
evaluated positively despite a decline in its minority population that
results from lower-income people being forced out?
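ArtPlace has not made its index composition explicit, so the following is only one common construction: a Simpson-style index giving the probability that two randomly drawn residents belong to different groups. The population shares are invented, but they show how such a measure penalizes a cohesive enclave:

    # One common racial/ethnic diversity index: 1 - sum(p_i^2), the
    # probability that two randomly drawn residents belong to different
    # groups. ArtPlace has not published its formula; shares are invented.

    def diversity_index(shares):
        return 1 - sum(p * p for p in shares)

    # A thriving, heavily Vietnamese neighborhood.
    enclave = diversity_index([0.80, 0.10, 0.05, 0.05])

    # A gentrifying neighborhood losing its minority population.
    gentrifying = diversity_index([0.40, 0.35, 0.15, 0.10])

    print(f"enclave:     {enclave:.2f}")      # ~0.34 -- scored less "vibrant"
    print(f"gentrifying: {gentrifying:.2f}")  # ~0.69 -- scored more "vibrant"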
Many of the NEA’s indicators are similarly fuzzy. As an indicator of impact on art communities and artists, its August 2012 RFP proposes
median earnings for residents employed in entertainment-related industries (arts, design, entertainment, sports, and media occupations).
But a very large number of people in these occupations are in sports
and media fields, not the arts. The measure does not include artists who
live outside the area but work there. And many artists self-report
their industry as other than the one listed above, e.g. musicians work
in the restaurant sector, and graphic artists work in motion pictures,
publishing and so on. ArtPlace is proposing to use very similar
indicators—creative industry jobs and workers in creative occupations—as
measures of vibrancy.
It is troubling that neither indicator-building effort has so far
demonstrated a willingness to digest and share publicly the rich,
accessible, and cautionary published research that tackles many of these
definitions. See for instance “Defining the Creative Economy: Industry and Occupational Approaches,”
the joint effort by researchers Doug DeNatale and Greg Wassall from the
New England Creative Economy Project, Randy Cohen of Americans for the
Arts, and me at the Arts Economy Initiative to unpack the definitional
and data challenges for measuring arts-related jobs and industries in Economic Development Quarterly.
Hopefully, we can have an engaging debate about these notions before
indices are cranked out and disseminated. Heartening signs: in its
August RFP, the NEA backtracks from its original plan, unveiled in a
spring 2012 webinar, to contract for wholesale construction of a given
set of indicators to be distributed to grantees. Instead, it is now
contracting for the testing of indicator suitability by conducting
twenty case studies. And just last week, the NEA issued a new RFP for developing a virtual storybook to document community outcomes, lessons learned and experiences associated with their creative placemaking projects.
Why Indicators Will Disappoint II: Dearth of Good Data
If definitional problems aren’t troubling enough, think about the
sheer inadequacy of data sources available for creating place-specific
indicators.
For more than a half-century, planning and economic development
scholars have been studying places and policy interventions to judge
success or failure. Yet when Anne Gadwa Nicodemus went in search of
research results on decades of public housing interventions, assuming
she could build on these for her evaluation of Artspace Projects’ artist
live/work and studio buildings, she found that they don’t really exist.
Here are five serious operational problems confronting creative
placemaking indicator construction.
First, the dimensions to be measured
are hard to pin down. Some of the variables proposed are quite
problematic—they don’t capture universal values for all people in the
community.
Take ArtPlace’s cell phone activity indicator,
for instance, which will be used on nights and weekends to map where
people congregate. Are places with cell activity to be judged as more
successful at creative placemaking? Cell phone usage is heavily
correlated with age, income and ethnicity. The older you are, the less
likely you are to have a cell phone or use it much, and the more likely
to rely on land-lines, which many young people do without. At the
November 2012 American Collegiate Schools of Planning annual meetings,
Brettany Shannon of University of Southern California presented research
results from a survey of 460 LA bus riders showing low cell phone usage
rates among the elderly, particularly Latinos. Among those aged 18-30,
only 9% of English speakers and 15% of Spanish speakers had no cell
phone, compared with 29% of English speakers and 54% of Spanish
speakers over age 50. A cell phone activity measure is also likely to
completely miss people attending jazz or classical music concerts,
dramas, and religious cultural events where cell phones are turned off.
And what about all those older folks who prefer to sit in coffee shops
and talk to each other during the day, play leadership roles in the
community through face-to-face work, or meet and engage in arts and
cultural activities around religious venues? Aren’t they congregating,
too?
Or take home ownership and home values, an indicator the NEA hopes to
use. Hmmm… home ownership rates—and values—in the US have been falling,
in large part due to overselling of homes during the housing bubble.
Renting is just as respectable an option for place lovers, especially
young people, retirees, and lower-income people in general. Why would we
want grantees to aspire to raise homeownership rates in their
neighborhoods, especially given gentrification concerns? Home ownership
does not insulate you against displacement, because as property values
rise, property taxes do as well, driving out renters and homeowners
alike on fixed or lower incomes. ArtPlace is developing “measures of
value, which capture changes in rental and ownership values…” This reads
like an invitation to gentrification, and it runs contrary to the NEA’s
aspirations for creative placemaking to support social cohesion and
community attachment.
Second, most good secondary data series are not available at spatial
scales corresponding to grantees’ target places. ArtPlace’s vibrancy
exercise aspires to compare neighborhoods with other neighborhoods, but
available data makes this task almost impossible to accomplish at highly
localized scales. Some data points, like arts employment by industry,
are available only down to the county level and only for more heavily
populated counties because of suppression problems (and because they are
lumped together with sports and media in some data sets). Good data on
artists from the Census (Public Use Microdata Sample) and American
Community Surveys, the only database that includes the self-employed and
unemployed, can’t be broken down below PUMAs (Public Use Microdata
Areas) of 100,000 people, which bear little relationship to real
neighborhoods or city districts (see Crossover, where we mapped artists using 2000 PUMS data for the Los Angeles and Bay Area metros).
Plus, many creative placemaking efforts aim to have an impact at multiple scales. Gadwa Nicodemus’s pioneering research studies, How Artist Space Matters and How Art Spaces Matter II, looked
in hindsight at Artspace’s artist live/work and mixed use projects
where the criteria for success varied widely between projects and for
various stakeholders involved in each. Artists, nonprofit arts
organizations, and commercial enterprises (e.g. cafes) in the buildings
variously hoped that the project would have an impact on the regional arts
community, neighborhood commercial activity and crime rates, and local
property values. The research methods included surveys and interviews
exploring whether the goals of the projects had been achieved in the
experience of target users. Other methods involved complex secondary data
manipulation to come up with indicators that are a good fit. Gadwa
Nicodemus’s studies demonstrate how much work it is to document real
impact along several dimensions, multiple spatial scales, and long
enough time periods to ensure a decent test. Her indicators, such as
hedonic price indices to gauge area property value change, are
sophisticated, but also very time- and skill-intensive to construct.
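For readers unfamiliar with the technique, a hedonic index regresses sale prices on property characteristics plus sale-period dummies, so that the period coefficients trace quality-adjusted price change. A minimal sketch, assuming a hypothetical file of sales records with invented column names (this is not Gadwa Nicodemus’s actual code):

    # Minimal hedonic price index sketch (illustrative only). Regress log
    # sale price on property traits plus sale-year dummies; the year
    # coefficients trace price change holding housing quality constant.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file with columns: price, sqft, bedrooms, age, year.
    sales = pd.read_csv("neighborhood_sales.csv")

    model = smf.ols("np.log(price) ~ sqft + bedrooms + age + C(year)",
                    data=sales).fit()

    # exp(year-dummy coefficient) ~ price level relative to the base year,
    # i.e. a quality-adjusted index of area property value change.
    year_effects = model.params.filter(like="C(year)")
    print(np.exp(year_effects))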
Third, even if you find data that address what you hope to achieve,
they are unlikely to be statistically significant at the scales you hope
for. In our work with PUMS data from the 2000 Census, a very reliable 5%
sample, we found we could not make reliable estimates of artist
populations at anything near a neighborhood scale. To map the location
of artists in Minneapolis, we had to carve the city into three segments
based on PUMA lines, and even then, we were pushing the statistical
reliability hard (Artists’ Centers, Figure 3, p. 108).
Some researchers are beginning to use the American Community Survey, a
1% sample much smaller than the decennial Census PUMS 5%, to build
local indicators, heedless of this statistical reliability challenge.
ArtPlace, for instance, is proposing to use ACS data to capture workers
in creative occupations at the Census Tract level. See the statistical
appendix to Leveraging Investments in Creativity (LINC)’s Creative Communities Artist Data User Guide for
a detailed explanation of this problem. Adding the ACS up over five
years, one way of improving reliability, is problematic if you are
trying to show change over a short period of time, which the creative
placemaking indicators presumably aspire to do.
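The arithmetic behind the reliability warning is simple to reproduce. A back-of-the-envelope sketch using the standard formula for the sampling error of a proportion; the tract size and creative-worker share below are invented:

    # Back-of-the-envelope sampling error for a small-area survey share.
    # Invented example: the share of workers in creative occupations,
    # estimated from a ~1% ACS sample in a 4,000-person tract versus a
    # ~5% PUMS sample in a 100,000-person PUMA.
    import math

    def moe_90(p, n):
        """90% margin of error for an estimated proportion p from n responses."""
        return 1.645 * math.sqrt(p * (1 - p) / n)

    p_hat = 0.05   # estimated creative-worker share (invented)
    n_tract = 40   # ~1% of a 4,000-person tract
    n_puma = 5000  # ~5% of a 100,000-person PUMA

    print(f"tract (ACS):  {p_hat:.0%} +/- {moe_90(p_hat, n_tract):.1%}")
    print(f"PUMA (PUMS):  {p_hat:.0%} +/- {moe_90(p_hat, n_puma):.1%}")
    # tract: 5% +/- 5.7% -- the interval includes zero; the estimate is noise
    # PUMA:  5% +/- 0.5% -- usable, but far above neighborhood scale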
Fourth, charting change over time successfully is a huge challenge. ArtPlace intends to
“assess the level of vibrancy of different areas within communities,
and importantly, to measure changes in vibrancy over time in the
communities where ArtPlace invests.” How can we expect projects that
hope to change the culture, participation, physical environment and
local economy to show anything in a period of one, two, or three years?
More ephemeral interventions may only have hard-to-measure impacts in
the year that they happen, even if they catalyze spinoff activities,
while the potentially clearer impact of brick-and-mortar projects may
take years to materialize.
We know from our case studies and from decades of urban planning and
design experience that changes in place take long periods of time. For
example, Cleveland’s Gordon Square Arts District, a case study in Creative Placemaking,
required at least five years for vision and conversations to translate
into a feasibility study, another few years to build the streetscape and
renovate the two existing shuttered theatres, and more to build the new
one.
Because it’s unlikely that the data will be good enough to chart
creative placemaking projects’ progress over time, we are likely to see
indicators used in a very different and pernicious way – to compare
places with each other in the current time period. But every creative
placemaking initiative is very, very different from others, and their
current rankings on these measures more apt to reflect long-time
neighborhood evolution and particularities rather than the impact of
their current activities. I can just see creative placemakers viewing
such comparisons and throwing their hands up in the air, shouting,
“but… but… but, our circumstances are not comparable!”
One final indicator challenge. As far as I can tell, there are very
few arts and cultural indicators included among the measures under
consideration. Where is the mission of bringing diverse people together
to celebrate, inspire, and be inspired? Shouldn’t creative placemaking
advance the intrinsic values and impact of the arts? Heightened and
broadened arts participation? Preserving cultural traditions? Better
quality art offerings? Providing beauty, expression, and critical
perspectives on our society? Are artists and arts organizations whose
greatest talents lie in the arts world to be judged only on their impact
outside of this core? Though arts participation is measurable, many of
these “intrinsic” outcomes are challenging data-wise, just as are
many of the “instrumental” outcomes given central place in current
indicator efforts. WolfBrown now offers a website that
aims to “change the conversation about the benefits of arts
participation, disseminate up-to-date information on emerging practices
in impact assessment, and encourage cultural organizations to embrace
impact assessment as standard operating practice.”
The Political Dangers of Relying on Indicators
I fear three kinds of negative political responses to reliance on
poorly-defined and operationalized indicators. First, it could be
off-putting to grantees and would-be grantees, including mayors, arts
organizations, community development organizations and the many other
partners to these projects. It could be baffling, even angering, to be
served up a book of cooked indicators with very little fit to one’s
project and aspirations and to be asked to make sense out of them. The
NEA’s recent RFP calls for the development of a user guide with some
examples, which will help. Those who have expressed concern report
hearing back something like “don’t worry about it – we’re not going to
hold you to any particular performance on these. They are just
informational for you.” Well, but then why invest in these indicators if
they aren’t going to be used for evaluation after all?!
Second, creative placemaking grants create competitors, and that
means they are generating losers as well as winners. Some who aren’t
funded the first time try again, and some are sanguine and grateful that
they were prompted to make the effort and form a team. But some will
give up. There are interesting parallels with place-based innovations in
the 1990s. The Clinton administration’s post-Cold War defense
conversion initiatives included the Technology Reinvestment Project, in
which regional consortia competed for funds to take local military
technologies into the civilian realm. As Michael Oden, Greg Bischak and
Chris Evans-Klock concluded in our 1995 Rutgers study (full report
available from the authors on request), the TRP failed after just a few
years because Members of Congress heard from too many disgruntled
constituents. In contrast, the Manufacturing Extension Partnership, begun
in the same period and administered by NIST, has survived because after
its first exploratory rounds, it partnered with state governments to
amplify funding for technical assistance to defense contractors
struggling with defense budget implosion everywhere. States, rather than
projects, then competed, eager for the federal funds.
Third, and most troubling, funders may begin favoring grants to
places that already look good on the indicators. Anne Gadwa Nicodemus
raised this in her GIA Reader article on creative placemaking last spring. ArtPlace’s own funding criteria
suggest this: “ArtPlace will favor investments… and sees its role as
providing venture funding in the form of grants, seeding entrepreneurial
projects that lead through the arts and already enjoy strong local
buy-in and will occur at places already showing signs of momentum….” Imagine
how a proposal to convert an old school in a very low income and
somewhat depopulated, minority neighborhood into an artist live/work,
studio and performance and learning space would stack up against a
proposal to add funding to a new outreach initiative in an area already
colonized by young people from elsewhere in the same city. A funder
might be tempted to fund the latter, where vibrancy is already
indicated, over the other, where the payoff might be much greater but
farther down the road.
In an Ideal World, Sophisticated Models
In any particular place, changes in the proposed indicators will not
be attributable to the creative placemaking intervention alone. So
imagine the distress of a fundee whose indicators are moving the wrong
way, placing it poorly in comparison with others. Area property
values may be falling because an environmentally obnoxious plant starts
up. Other projects might look great on indicators not because of their
initiatives, but because another intervention, like a new light rail
system or a new community-based school dramatically changes the
neighborhood.
What we’d love to have, but don’t at this point, are
sophisticated causal models of creative placemaking. The models would
identify the multiple actors in the target place and take into account
the results of their separate actions. A funded creative placemaking
project team would be just one such “actor” among several (e.g. real
estate developers, private sector employers, resident associations,
community development nonprofits and so on).
A good model would account for other non-arts forces at work that
will interact with the various actors’ initiatives and choices. This is
crucial, and the logic models proposed by Moss, Zabel and others don’t
do it. Scholars of urban planning well know how tricky it is to isolate
the impact of a particular intervention when there are so many others
occurring simultaneously (crime prevention, community development,
social services, infrastructure investments like light rail or street
repaving).
Furthermore, models should be longitudinal, i.e., charting
progress in the particular place over time, rather than comparing one
place cross-sectionally with others that are quite unlikely to share the
same actors, features and circumstances. If we create models that are
causal, acknowledge other forces at work, and are applied over time,
“we’ll be able to clearly document the critical power of arts and
culture in healthy community development,” reflects Deborah Cullinan of
San Francisco’s Intersection for the Arts in a followup to our NAMAC
panel.
Such multivariate models, as social scientists and urban planners
call them, lend themselves to careful tests of hypotheses about change.
We can ask if a particular action, like the siting of an interstate
highway interchange or adding a prison or being funded in a federal
program like the Appalachian Regional Commission, produces more
employment or higher incomes or better quality of life for its host city
or neighborhood when compared with twin or comparable places, as Andrew
Isserman and colleagues have done in their “quasi-experimental” work
(write me for a summary of these, soon to be published).
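A minimal sketch of the quasi-experimental logic: a simple difference-in-differences comparison against a matched “twin” place. The numbers are invented, and Isserman’s actual methods are considerably more elaborate:

    # Difference-in-differences sketch of the twin-place comparison
    # (illustrative only). Outcome could be median income, jobs, etc.

    def did(treated_before, treated_after, twin_before, twin_after):
        """Change in the treated place minus change in its matched twin."""
        return (treated_after - treated_before) - (twin_after - twin_before)

    # Invented numbers: a funded place and a demographically matched twin.
    effect = did(treated_before=41_000, treated_after=45_000,
                 twin_before=40_500, twin_after=42_000)

    print(f"estimated intervention effect: ${effect:,.0f}")  # $2,500
    # The twin nets out region-wide trends that would otherwise be
    # misread as project impact.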
We can also run tests to see if differentials in city and regional
arts participation rates and presence of arts organizations can be
explained by differences in funding, demographics, or features of local
economies. My teammates and I used Cultural Data Project and National
Center for Charitable Statistics data on nonprofit arts organizations in
California to do this for all California cities with more than 20,000 residents.
Our results, while cross-sectional, suggest that concerted arts and
culture-building by local Californians over time leads to higher arts
participation rates and more arts offerings than can be explained by
other factors. The point is that techniques like these DO take into
account other forces (positive and negative) operating in the place
where creative placemaking unfolds.
Charting a Better Path
It’s understandable why the NEA and ArtPlace are turning to
indicators. Their budgets for creative placemaking are relatively small,
and they’d prefer to spend them on more programming and more places
rather than on expensive, careful evaluations. Nevertheless, designing
indicators unrelated to specific funded projects seems a poor way
forward. Here are some alternatives.
Commit to real evaluation. This need not be as
expensive as it seems. Imagine if the NEA and ArtPlace, instead of
contracting to produce one-size-fits-all indicators, were to design a
three-stage evaluation process. Grantees propose staged criteria for
success and reflect on them at specified junctures. Funding is awarded
on the basis of the appropriateness of this evaluative process and
continued on receipt of reflections. Funders use these to give feedback
to the grantee and retool their expectations if necessary, and to
summarize and redesign overall creative placemaking achievements. This
is more or less what many philanthropic foundations do currently and
have for many years, the NEA included. Better learning is apt to emerge
from this process than from a set of indicator tables and graphics.
ArtPlace is well-positioned to draw on the expertise of its member
foundations in this regard.
Build cooperation among grantees to soften the edge of competition for funds.
Convene grantees and would-be grantees annually to talk about success,
failures, and problems. Ask successful grantees to share their
experience and expertise with others who wish to try similar projects
elsewhere. During Leveraging Investments in Creativity’s ten-year
lifespan, it convened its creative community leaders annually and
sometimes more often, resulting in tremendous cross-fertilization that
boosted success. Often, what was working elsewhere turned out to be a
better mission or process than what a local group had planned. Again,
ArtPlace in particular could create a forum for this kind of cooperative
learning. And, as mentioned, NEA’s webinars are a step in the right
direction. Imagine, notes my NAMAC co-panelist Deborah Cullinan of
Intersection for the Arts, if creative placemaking funders invested in
cohort learning over time, with enough longevity to build relationships,
share lessons, and nurture collaborations.
Finally, the National Endowment for the Arts and ArtPlace could provide technical assistance to creative placemaking grantees, as the Manufacturing Extension Partnership does
for small manufacturers. Anne Gadwa Nicodemus and I continually receive
phone calls from people across the country psyched to start projects
but in need of information and skills on multiple fronts. There are
leaders in other communities, and consultants, too, who know how
creative placemaking works under diverse circumstances and who can form a
loose consortium of talent: people who understand the political
framework, the financial challenges, and the way to build partnerships.
Artspace Projects, for instance, has recently converted over a quarter
century of experience with more than two dozen completed artist and
arts-serving projects into a consultancy to help people in more places
craft arts-based placemaking projects.
Wouldn’t it be wonderful if, in a few years’ time, we could say:
Look! Here is the body of learning and insights we’ve compiled about
creative placemaking–how to do it well, where the diverse impacts are,
and how they can be documented. With indicators dominating the
evaluation process at present, we are unlikely to learn what we could
from these young experiments. An indicators-preoccupied evaluation
process is likely to leave us disappointed, with spreadsheets and charts
made quickly obsolete by changing definitions and data collection
procedures. Let’s think through outcomes in a more grounded, holistic
way. Let’s continue, and broaden, the conversation!