Social-ecological organization is a multidimensional phenomenon that combines material and symbolic processes. However, the coupling between social and ecological subsystems is often conceptualized as purely material, thus reducing the symbolic dimension to its behavioral and actionable expressions. In this paper I conceptualize social-ecological systems as doubly coupled. On the one hand, material expressions of socio-cultural processes affect and are affected by ecological dynamics. On the other hand, coupled social-ecological material dynamics are concurrently coupled with subjective dynamics via coding, decoding, personal experience, and human agency. This second coupling operates across two organizationally heterogeneous dimensions: material and symbolic. Although resilience thinking builds on the recognition of organizational asymmetry between living and nonliving systems, it has overlooked the equivalent asymmetry between ecological and socio-cultural subsystems.
Three guiding concepts are proposed to formalize double coupling. The first, social-ecological asymmetry, expands on past seminal work on ecological self-organization to incorporate reflexivity and subjectivity into social-ecological modeling. Organizational asymmetry is based on the distinction between social rules, which are symbolically produced and changed through human agents’ reflexivity and purpose, and biophysical rules, which are determined by functional relations between ecological components. The second guiding concept, conscious power, brings to the fore human agents’ distinctive capacity to produce their own subjective identity and the consequences of this capacity for social-ecological organization. The third concept, congruence between subjective and objective dynamics, redefines sustainability as contingent on congruent relations between material and symbolic processes. Social-ecological theories and analyses based on these three guiding concepts would support the integration of current structuralist-functionalist methods, which sufficiently and appropriately characterize ecological organization, with ethnographic and narrative methods exploring human intentionality, reflexivity, and biographical development.
Norwegian protected areas have historically been managed by a centralized, expert bureaucracy; however, a governance change in 2010 decentralized and delegated the right to manage protected areas to locally elected politicians and elected Sámi representatives in newly established National Park Boards. We explore how this governance change affects adaptive capacity within the reindeer industry, as reindeer herders now participate with other users in decision-making processes for large tracts of protected areas in which they have pasture access. Aspects of adaptive capacity and resilience thinking are useful as complementary dimensions to a social-ecological system framework (Ostrom 2007) in exploring the dynamics of complex adaptive social-ecological systems. The National Park Board provides a novel example of adaptive governance that can foster resilient livelihoods for the various groups of actors that depend on protected areas. Data for this paper were gathered primarily through observation in National Park Board meetings, focus groups, and qualitative interviews with reindeer herders and other key stakeholders. We have identified certain aspects of the national park governance that may serve as sources of resilience and adaptive capacity for the natural system and the pastoral people who rely on using these areas. The regional National Park Board is thus a critical mechanism that provides an action arena for participation and conflict resolution. However, desired outcomes such as coproduction of knowledge, social learning, and increased adaptive capacity within reindeer husbandry have not yet been realized. The Board’s limited scope of action, and the mismatch between what is important to the herders and what is actually addressed by the Board, are key challenges for the success of this management model.
Objective: Survival time is an important type of outcome variable in treatment research. Currently, limited guidance is available regarding performing mediation analyses with survival outcomes, which generally do not have normally distributed errors, and contain unobserved (censored) events. We present considerations for choosing an approach, using a comparison of semi-parametric proportional hazards (PH) and fully parametric accelerated failure time (AFT) approaches for illustration.
Method: We compare PH and AFT models and procedures in their integration into mediation models and review their ability to produce coefficients that estimate causal effects. Using simulation studies modeling Weibull-distributed survival times, we compare statistical properties of mediation analyses incorporating PH and AFT approaches (employing SAS procedures PHREG and LIFEREG, respectively) under varied data conditions, some including censoring. A simulated data set illustrates the findings.
Results: AFT models integrate more easily than PH models into mediation models. Furthermore, mediation analyses incorporating LIFEREG produce coefficients that can estimate causal effects, and demonstrate superior statistical properties. Censoring introduces bias in the coefficient estimate representing the treatment effect on outcome—underestimation in LIFEREG, and overestimation in PHREG. With LIFEREG, this bias can be addressed using an alternative estimate obtained from combining other coefficients, whereas this is not possible with PHREG.
Conclusions: When Weibull assumptions are not violated, there are compelling advantages to using LIFEREG over PHREG for mediation analyses involving survival-time outcomes. Irrespective of the procedures used, the interpretation of coefficients, effects of censoring on coefficient estimates, and statistical properties should be taken into account when reporting results.
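As a rough illustration of the product-of-coefficients approach described above, the following Python sketch simulates Weibull survival times under a single-mediator model and fits the two regression paths with the lifelines and statsmodels libraries rather than the SAS procedures used in the paper; the variable names, effect sizes, and censoring mechanism are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(0)
n = 1000

# Simulate a single-mediator model: treatment X -> mediator M -> survival time T.
x = rng.binomial(1, 0.5, n)                        # randomized treatment
a = 0.5                                            # X -> M path
m = a * x + rng.normal(size=n)                     # mediator
b, c_prime, sigma = 0.4, 0.2, 0.8                  # M -> log T, direct effect, scale
# AFT form: log T is linear in X and M with an extreme-value error (Weibull times).
log_t = 1.0 + c_prime * x + b * m + sigma * np.log(rng.weibull(1.0, n))
t = np.exp(log_t)
censor = rng.exponential(np.exp(2.0), n)           # independent censoring times
df = pd.DataFrame({"X": x, "M": m,
                   "time": np.minimum(t, censor),
                   "event": (t <= censor).astype(int)})

# Path a: ordinary regression of the mediator on treatment.
a_hat = sm.OLS(df["M"], sm.add_constant(df["X"])).fit().params["X"]

# Path b and direct effect: Weibull AFT regression of survival time on X and M.
aft = WeibullAFTFitter()
aft.fit(df, duration_col="time", event_col="event")
b_hat = aft.params_.loc[("lambda_", "M")]
c_prime_hat = aft.params_.loc[("lambda_", "X")]

# Product-of-coefficients mediated (indirect) effect on the log-time scale.
print("indirect effect a*b =", a_hat * b_hat, " direct effect =", c_prime_hat)
```

The sketch mirrors the structure of the simulation study (Weibull-distributed times, a mediator measured without censoring, and an AFT model for the outcome) but is not the authors' code.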
Megacities are major sources of anthropogenic fossil fuel CO2 (FFCO2) emissions. The spatial extents of these large urban systems cover areas of 10 000 km2 or more with complex topography and changing landscapes. We present a high-resolution land–atmosphere modelling system for urban CO2 emissions over the Los Angeles (LA) megacity area. The Weather Research and Forecasting (WRF)-Chem model was coupled to a very high-resolution FFCO2 emission product, Hestia-LA, to simulate atmospheric CO2 concentrations across the LA megacity at spatial resolutions as fine as ∼1 km. We evaluated multiple WRF configurations, selecting one that minimized errors in wind speed, wind direction, and boundary layer height as evaluated against meteorological data collected during the CalNex-LA campaign (May–June 2010). Our results show no significant difference between moderate-resolution (4 km) and high-resolution (1.3 km) simulations when evaluated against surface meteorological data, but the high-resolution configurations better resolved planetary boundary layer heights and vertical gradients in the horizontal mean winds. We coupled our WRF configuration with the Vulcan 2.2 (10 km resolution) and Hestia-LA (1.3 km resolution) fossil fuel CO2 emission products to evaluate the impact of the spatial resolution of the CO2 emission products and of the meteorological transport model on the representation of spatiotemporal variability in simulated atmospheric CO2 concentrations. We find that high spatial resolution in the fossil fuel CO2 emissions is more important than in the atmospheric model for capturing CO2 concentration variability across the LA megacity. Finally, we present a novel approach that employs simultaneous correlations of the simulated atmospheric CO2 fields to qualitatively evaluate the greenhouse gas measurement network over the LA megacity. Spatial correlations in the atmospheric CO2 fields reflect the coverage of individual measurement sites when a statistically significant number of sites observe emissions from a specific source or location. We conclude that elevated atmospheric CO2 concentrations over the LA megacity are composed of multiple fine-scale plumes rather than a single homogeneous urban dome. Furthermore, we conclude that FFCO2 emissions monitoring in the LA megacity requires FFCO2 emissions modelling at ∼1 km resolution, because coarser-resolution emissions modelling tends to overestimate the observational constraints on the emissions estimates.
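The correlation-based network evaluation can be sketched schematically in Python as follows; this is not the authors' code, and the grid dimensions, site location, and synthetic CO2 field are placeholders standing in for WRF-Chem/Hestia-LA output.

```python
import numpy as np

# Hypothetical simulated CO2 field: hourly values on a ny x nx model grid.
# In practice this would be the WRF-Chem + Hestia-LA simulation output.
rng = np.random.default_rng(1)
nt, ny, nx = 720, 60, 80                        # one month of hourly fields
co2 = 400 + rng.gamma(2.0, 2.0, (nt, ny, nx))   # placeholder enhancements (ppm)

def site_footprint(co2_field, site_iy, site_ix):
    """Correlate the CO2 time series at one measurement site with every grid cell.

    High zero-lag (simultaneous) correlation marks the area whose emission
    signals the site 'sees'; mapping these footprints for all sites indicates
    network coverage and gaps.
    """
    nt = co2_field.shape[0]
    site = co2_field[:, site_iy, site_ix]
    grid = co2_field.reshape(nt, -1)
    site_anom = site - site.mean()
    grid_anom = grid - grid.mean(axis=0)
    num = (site_anom[:, None] * grid_anom).sum(axis=0)
    den = np.sqrt((site_anom**2).sum() * (grid_anom**2).sum(axis=0))
    return (num / den).reshape(co2_field.shape[1:])

corr_map = site_footprint(co2, site_iy=30, site_ix=40)
print("grid cells with r > 0.7:", int((corr_map > 0.7).sum()))
```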
There is an increasing demand in higher education institutions for training in complex environmental problems. Such training requires a careful mix of conventional methods and innovative solutions, a task not always easy to accomplish. In this paper we review the literature on this theme, highlight relevant advances in the pedagogical literature, and report on some examples resulting from our recent efforts to teach complex environmental issues. The examples range from full credit courses in sustainable development and research methods to project-based and in-class activity units. A consensus from the literature is that lectures alone are not sufficient to fully engage students in these issues. A conclusion from the review of examples is that problem-based and project-based learning, e.g., through case studies, experiential learning opportunities, or real-world applications, offers much promise. This could be greatly facilitated by online hubs through which teachers, students, and other members of the practitioner and academic community share experiences in teaching and research, as we have done here.
NeuroML is an XML-based model description language, which provides a powerful common data format for defining and exchanging models of neurons and neuronal networks. In the latest version of NeuroML, the structure and behavior of ion channel, synapse, cell, and network model descriptions are based on underlying definitions provided in LEMS, a domain-independent language for expressing hierarchical mathematical models of physical entities. While declarative approaches for describing models have led to greater exchange of model elements among software tools in computational neuroscience, a frequent criticism of XML-based languages is that they are difficult to work with directly. Here we describe two Application Programming Interfaces (APIs) written in Python (http://www.python.org), which simplify the process of developing and modifying models expressed in NeuroML and LEMS. The libNeuroML API provides a Python object model with a direct mapping to all NeuroML concepts defined by the NeuroML Schema, which facilitates reading and writing the XML equivalents. In addition, it offers a memory-efficient, array-based internal representation, which is useful for handling large-scale connectomics data. The libNeuroML API also includes support for performing common operations that are required when working with NeuroML documents. Access to the LEMS data model is provided by the PyLEMS API, which provides a Python implementation of the LEMS language, including the ability to simulate most models expressed in LEMS. Together, libNeuroML and PyLEMS provide a comprehensive solution for interacting with NeuroML models in a Python environment.
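A minimal sketch of the libNeuroML workflow, based on the library's published examples, is shown below; exact element and attribute names may differ between NeuroML/libNeuroML versions, and the ids and parameter values are illustrative.

```python
from neuroml import NeuroMLDocument, IafCell, Network, Population
import neuroml.writers as writers

# Build a minimal NeuroML document: one integrate-and-fire cell type
# and a small population of that cell inside a network.
nml_doc = NeuroMLDocument(id="IafNet")

iaf_cell = IafCell(id="iaf0",
                   C="1.0 nF",
                   thresh="-50mV",
                   reset="-65mV",
                   leak_conductance="10 nS",
                   leak_reversal="-65mV")
nml_doc.iaf_cells.append(iaf_cell)

net = Network(id="IafNet")
pop = Population(id="IafPop", component=iaf_cell.id, size=10)
net.populations.append(pop)
nml_doc.networks.append(net)

# Serialize the Python object model to standard NeuroML XML.
writers.NeuroMLWriter.write(nml_doc, "iaf_net.nml")
```

Because the object model maps directly onto the NeuroML Schema, the same document can be read back, modified programmatically, and re-exported without handling the XML by hand.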
Computational models are increasingly important for studying complex neurophysiological systems. As scientific tools, such models need to be reproducible and open to critical evaluation by a range of scientists. However, published models are currently implemented using a diverse set of modeling approaches, simulation tools, and computer languages, making them inaccessible and difficult to reproduce. Models also typically contain concepts that are tightly linked to domain-specific simulators, or depend on knowledge that is described exclusively in text-based documentation. To address these issues we have developed a compact, hierarchical, XML-based language called LEMS (Low Entropy Model Specification), which can define the structure and dynamics of a wide range of biological models in a fully machine-readable format.
We describe how LEMS underpins the latest version of NeuroML and show that this framework can define models of ion channels, synapses, neurons, and networks. Unit handling, often a source of error when reusing models, is built into the core of the language by specifying physical quantities in models in terms of their base dimensions. We show how LEMS, together with the open-source Java- and Python-based libraries we have developed, facilitates the generation of scripts for multiple neuronal simulators and provides a route for simulator-free code generation. We establish that LEMS can be used to define models from systems biology and map them to neuroscience-domain-specific simulators, enabling models to be shared between these traditionally separate disciplines. LEMS and NeuroML 2 provide a new, comprehensive framework for defining computational models of neuronal and other biological systems in a machine-readable format, making them more reproducible and increasing the transparency and accessibility of their underlying structure and properties.
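As a minimal illustration of working with LEMS documents from the Python library mentioned above, the sketch below round-trips a LEMS file through the PyLEMS object model; the file names are hypothetical and only the basic import/export calls of the Model class are assumed.

```python
from lems.model.model import Model

# Load an existing LEMS document into the PyLEMS object model.
# The file name is hypothetical; any valid LEMS file would do.
model = Model()
model.import_from_file("LEMS_example.xml")

# The object model mirrors the LEMS structure (Dimensions, Units,
# ComponentTypes, Components), so the model can be inspected or modified
# programmatically and then serialized back to XML.
model.export_to_file("LEMS_example_roundtrip.xml")
```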
Is the mirror neuron system (MNS) used in language understanding? According to embodied accounts of language comprehension, understanding sentences describing actions makes use of neural mechanisms of action control, including the MNS. Consequently, repeatedly comprehending sentences describing similar actions should induce adaptation of the MNS, thereby warping its use in other cognitive processes such as action recognition and prediction. To test this prediction, participants read blocks of multiple sentences in which each sentence described transfer of objects in a direction away from or toward the reader. Following each block, adaptation was measured by having participants predict the end-point of videotaped actions. The adapting sentences disrupted prediction of actions in the same direction, but (a) only for videos of biological motion, and (b) only when the effector implied by the language (e.g., the hand) matched the videos. These findings are signatures of the MNS.
This study is an attempt to use group information on climate change collected from farmers in eastern Uttar Pradesh, India, to address a key question for climate change policy: how can farmers be encouraged to adapt to climate change? First, we investigate farmers’ perception of and adaptation to climate change using content analysis and group information. The findings are then compared with climatic and agricultural information collected from secondary sources. Results suggest that although farmers are aware of long-term changes in climatic factors (temperature and rainfall, for example), they are unable to identify these changes as climate change. Farmers are also aware of the risks generated by climate variability and extreme climatic events. However, farmers are not taking concrete steps to deal with perceived climatic changes, although we find that they are changing their agricultural and farming practices. These include changes in sowing and harvesting times, cultivation of short-duration crop varieties, inter-cropping, changes in cropping patterns, investment in irrigation, and agroforestry. These changes may be considered passive responses, or adaptation strategies, to climate change; farmers may be implicitly taking initiatives to adapt to climate change. Finally, the paper suggests some policy interventions to scale up adaptation to climate change in Indian agriculture.
Modern software applications are commonly built by leveraging pre-fabricated modules, e.g., application programming interfaces (APIs), which are essential to implement the desired functionality and help reduce overall development cost and time. When APIs deal with security-related functionality, it is critical to ensure they comply with their design requirements, since otherwise unexpected flaws and vulnerabilities may occur. Often, such APIs lack sufficient specification detail, or implement a semantically different version of the security model they are meant to enforce, complicating the runtime enforcement of security properties and making it harder to minimize serious vulnerabilities. This paper proposes a novel approach to address this challenge by leveraging the notion of software assertions. We focus on security requirements in role-based access control models and show how proper verification at the source-code level can be performed with our proposed approach as well as with automated state-of-the-art assertion-based techniques.
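The assertion-based idea can be illustrated with a toy Python sketch (not taken from the paper); the roles, permission table, and API below are entirely hypothetical.

```python
# Illustrative only: a toy role-based access control (RBAC) layer whose
# security requirements are checked with assertions at the call site.
# Role names, the permission table, and the API are hypothetical.

ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

class Session:
    def __init__(self, user, roles):
        self.user = user
        self.roles = set(roles)

    def permissions(self):
        # Union of the permissions granted by each of the session's roles.
        return set().union(*(ROLE_PERMISSIONS.get(r, set()) for r in self.roles))

def delete_record(session, record_id):
    # Assertion encoding the design requirement "only sessions holding the
    # 'delete' permission may remove records"; a runtime violation signals a
    # mismatch between the API's behaviour and its intended security model.
    assert "delete" in session.permissions(), \
        f"RBAC violation: {session.user} lacks 'delete' permission"
    print(f"record {record_id} deleted by {session.user}")

delete_record(Session("alice", ["admin"]), 42)     # passes the assertion
# delete_record(Session("bob", ["viewer"]), 42)    # would raise AssertionError
```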