1 - Priority 1: Launch a Minimum Viable Product (MVP)

We want to give potential users confidence that they can appropriately apply readyforwhatsnext to their decision problems by bringing all of our existing software - both development releases and unreleased code - to production release status.

Why?

All of our software, regardless of status, is supplied without any warranty. However, our views about whether an item of software is potentially appropriate for others to use in undertaking real world analyses can be inferred from its release status. If it is not a production release, we probably believe that it needs more development, more testing and better documentation before it can be used for any purpose other than the specific studies in which we have already applied it. Partly for this reason, it is unlikely that any item of our software will be widely adopted until it is available as a production release. Nor can we meaningfully track uptake of our software until it becomes available as a dedicated production release on the Comprehensive R Archive Network (CRAN). Finally, we need a critical mass of model modules available as production releases so that they can be combined to model moderately complex systems.

What?

Bringing an initial set of development-version and pipeline libraries to production release will constitute the launch of a readyforwhatsnext Minimum Viable Product (MVP). The MVP will comprise an initial skeleton of production-ready modules for modelling people, places, platforms and programs.

The most important types of help we need with achieving this goal are funding, code contributions, community support and advice.

How?

The main tasks to be completed to bring all of our existing code libraries to production release are as follows:

  1. (For unreleased software) Address all issues preventing public release of code repositories (e.g. fixing errors that prevent core functions from working, removing all traces of potentially confidential artefacts from all versions / branches of a repository, etc.).

  2. (For code libraries currently implemented using only the functional programming paradigm) Author and test new modules.

  3. Write / update unit tests (tests of individual functions / modules for multiple potential uses / inputs that are automatically run every time a new version of a library is pushed to the main branch of its development release repository). A minimal example is sketched after this list.

  4. Enhance the documentation that is automatically authored by algorithms from the ready4 framework, which we use to author readyforwhatsnext. This will involve some or all of:

  • minor modifications of function names / arguments / code;

  • updating the taxonomy datasets used in the documentation writing algorithm; and/or

  • updating the documentation authoring algorithms (within the ready4fun and ready4class packages).

  5. Add human-authored documentation for the modules contained in each library.

  6. (For some libraries) Add a user interface.
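
To make task 3 concrete, here is a minimal sketch of the style of unit test we have in mind, written with the widely used testthat package. The function being tested (add_qalys()) is hypothetical and is not part of any readyforwhatsnext library.

```r
# A hypothetical function and test, illustrating the unit testing style
# described in task 3; add_qalys() is not part of any readyforwhatsnext library.
library(testthat)

add_qalys <- function(utility_dbl, years_dbl) {
  stopifnot(is.numeric(utility_dbl), is.numeric(years_dbl))
  utility_dbl * years_dbl
}

test_that("add_qalys handles multiple potential uses / inputs", {
  expect_equal(add_qalys(0.8, 2), 1.6)                   # scalar inputs
  expect_equal(add_qalys(c(0.5, 1.0), 1), c(0.5, 1.0))   # vector inputs
  expect_error(add_qalys("high", 1))                     # invalid inputs fail fast
})
```

Tests like this, stored in a package's tests/testthat/ directory, can then be run automatically by a continuous integration service on every push to the main branch.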

When?

Our production releases will be submitted to CRAN. Because CRAN does not allow submitted R packages to depend on development-version packages, the dependency network of our code libraries shapes the sequence in which we can bring them to production release.
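
The sketch below illustrates how such a release sequence can be derived: packages are sorted so that each is released only after all of its internal dependencies. The dependency map shown is illustrative; in practice it would be read from each library's DESCRIPTION file.

```r
# A minimal sketch of deriving a CRAN release order from internal package
# dependencies; the dependency map below is illustrative, not our real one.
release_order <- function(deps_ls) {
  ordered_chr <- character(0)
  remaining_ls <- deps_ls
  while (length(remaining_ls) > 0) {
    # A package can be released once all of its internal dependencies have been.
    ready_chr <- names(remaining_ls)[vapply(remaining_ls,
                                            function(x) all(x %in% ordered_chr),
                                            logical(1))]
    if (length(ready_chr) == 0) stop("Circular dependency detected")
    ordered_chr <- c(ordered_chr, ready_chr)
    remaining_ls <- remaining_ls[setdiff(names(remaining_ls), ready_chr)]
  }
  ordered_chr
}

release_order(list(ready4 = character(0),
                   ready4use = "ready4",
                   ready4show = "ready4",
                   youthvars = c("ready4", "ready4use")))
#> [1] "ready4"     "ready4use"  "ready4show" "youthvars"
```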

How quickly we can launch production releases of all our code depends on how much, and what type of, help we get. Working within our current resources, we expect the first of the 23 libraries listed to be released during late 2024 and the last during late 2026. With your help, this release schedule can be sped up.

2 - Priority 2: Maintain readyforwhatsnext

We want readyforwhatsnext to continually improve and be updated in response to the needs of potential users and stakeholders.

Why?

A significant limitation of many health economic models is that they are not updated and can become progressively less valid with time. The importance of maintaining a computational model increases if, like readyforwhatsnext, it is intended to have multiple applications and users. As we progressively make the production releases that launch the MVP model, we intend that people will start using it. As readyforwhatsnext becomes more widely used, its limitations (errors, bugs, restrictive functionality and confusing / inadequate documentation) are more likely to become exposed and to require remediation. Addressing such issues needs to be done skillfully and considerately to avoid unintended consequences for existing model users (e.g. to ensure that software edits made to fix one problem do not prevent previously written replication code or downstream dependencies from executing correctly). Open source projects like readyforwhatsnext also need to make changes in response to decisions by third parties - such as edits to upstream dependencies and changes in the policies of hosting repositories - and to update citation / acknowledgement information to appropriately reflect new contributors.

What?

All readyforwhatsnext model software needs to be maintained and updated to identify and fix bugs, enhance functionality and usability, respond to changes in upstream dependencies and to conscientiously deprecate outdated code. Open access datasets made available for use in modelling analyses need to be actively curated to ensure they remain relevant to current decision contexts. Decision aids need to be reviewed and updated to ensure they continue to use the most up to date and appropriate modules and input data.

The most important types of help we need with this priority area are funding, code contributions, community support and advice.

How?

The main tasks for the maintenance of framework and model software are to:

  1. Appropriately configure and update the settings of the ready4 GitHub organisation and its constituent repositories to facilitate easy to follow and efficient maintenance workflows.

  2. Proactively:

  • author ongoing improvements to software testing, documentation and functionality (see the sketch after this list);

  • make archived releases of key development milestones in the ready4 Zenodo community; and

  • submit new production releases to CRAN.

  3. Reactively elicit, review and address feedback and contributions from the readyforwhatsnext community (e.g. bugs, issues and feature requests).
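
As an indication of what tasks 1 and 2 can look like in practice, the sketch below uses helpers from the usethis and devtools packages (standard R ecosystem tools, not a readyforwhatsnext-specific API) to configure automated checking and submit a release.

```r
# A minimal sketch of routine maintenance automation, assuming the standard
# usethis / devtools toolchain rather than any readyforwhatsnext-specific API.
library(usethis)

# Task 1: configure a repository so that R CMD check and test coverage
# reporting run automatically via GitHub Actions on every push.
use_github_action("check-standard")
use_github_action("test-coverage")

# Task 2: when a production release is ready, check it locally and submit it
# to CRAN.
devtools::check()
devtools::submit_cran()
```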

The main tasks for curating model data collections include:

  1. Implementing ongoing improvements and updates to meta-data descriptors of data collections and individual files.

  2. Facilitating the linking of datasets to and from the ready4 Dataverse.

  3. Reviewing all collections within the ready4 Dataverse to identify datasets or files that are potentially out of date (see the sketch after this list).

  4. Creating and publishing new versions of affected datasets with the necessary additions, deletions and edits and updated metadata. Prior versions of data collections remain publicly available.

  5. Informing the readyforwhatsnext community of the updated collections.
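
A starting point for task 3 is sketched below, using the IQSS dataverse client for R to list the contents of the ready4 collection. The exact fields on the returned objects vary between Dataverse versions, so treat the details as an assumption.

```r
# A minimal sketch of beginning a currency review of the ready4 Dataverse
# (task 3), assuming the IQSS `dataverse` client package.
library(dataverse)

Sys.setenv("DATAVERSE_SERVER" = "dataverse.harvard.edu")

# Retrieve the ready4 collection and enumerate the datasets it contains, as
# the starting point for identifying potentially out of date items.
ready4_dv <- get_dataverse("ready4")
contents_ls <- dataverse_contents(ready4_dv)
length(contents_ls)  # number of items to review
```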

The main tasks for curating decision aids include:

  1. Monitoring the repositories of the software and data used by each decision aid for important updates.

  2. Deploying an updated app bundle of software and data to a test environment on Shinyapps.io (see the sketch after this list).

  3. Testing the new deployment and eliciting user feedback.

  4. Implementing any required fixes identified during testing.

  5. Deploying the updated app to a Shinyapps.io production environment.

  6. Informing the readyforwhatsnext community of the updated decision aid.
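
Steps 2 and 5 of this workflow can be implemented with the rsconnect package, as in the sketch below; the account, token placeholders, directory and app names are all illustrative.

```r
# A minimal sketch of steps 2 and 5, assuming the standard rsconnect workflow;
# account, token placeholders and app names are illustrative.
library(rsconnect)

# One-off account configuration (token and secret come from the Shinyapps.io
# dashboard).
setAccountInfo(name = "our-account", token = "<TOKEN>", secret = "<SECRET>")

# Step 2: deploy the updated bundle to a test instance.
deployApp(appDir = "decision_aid/", appName = "decision-aid-test",
          account = "our-account")

# Step 5: once testing and fixes (steps 3 and 4) are complete, deploy to the
# production instance.
deployApp(appDir = "decision_aid/", appName = "decision-aid",
          account = "our-account")
```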

When?

Maintenance is a current and ongoing responsibility. Maintenance obligations are expected to grow considerably as we launch more production releases, extend the readyforwhatsnext model and grow a user community.

3 - Priority 3: Apply readyforwhatsnext to undertake replications and transfers

We want readyforwhatsnext to be used to implement replications and transfers of the original studies for which its software was developed.

Why?

Authoring new readyforwhatsnext modules can involve a significant investment of time and skills, an investment that is typically made in the context of implementing a modelling project for a scientific study. However, once authored, these modules may significantly streamline the implementation of subsequent studies to which they may apply - more replications and generalisations mean more open access data and module customisations available to other users, enhancing the practical utility of readyforwhatsnext.

What?

We plan to demonstrate that studies implemented with readyforwhatsnext are relatively straightforward and efficient to replicate and transfer. The most important initial types of help we need with achieving this goal are funding, projects, code contributions and advice.

How?

The main tasks for implementing study replications and transfers are:

  1. Identify the example study to be replicated or transferred.

  2. Review that study’s analysis program:

  • do the data used in this program have similar structure / concepts / sampling to the data for which a new analysis is planned?
  • are the modules used in that program from production release module libraries, and do any of them require authoring of inheriting modules to selectively update aspects of module data-structures or algorithms?
  3. Create a new input dataset, labelling and (for non-confidential data) storing the data in an online repository (which can be kept private for now).

  4. (If new inheriting modules are required) Make a code contribution to create and test new inheriting modules (a minimal sketch follows this list).

  5. Adapt the original study’s analysis program to account for differences in input data, model modules and study reporting.

  6. Share the new analysis program in the ready4 Zenodo community.

  7. Ensure the online model input dataset is made public and submit it as a Linked Dataverse Dataset in the appropriate section of the ready4 Dataverse.
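
To illustrate the inheriting modules referred to in item 4, the sketch below uses R’s S4 system, on which ready4 modules are built. The class, slot and method names are purely illustrative and are not the actual ready4 API.

```r
# A purely illustrative sketch of an inheriting module, using the S4 system on
# which ready4 modules are built; these class and method names are not the
# actual ready4 API.
setClass("UtilityMapper", slots = c(coefs_dbl = "numeric"))

# The inheriting module reuses the parent's data structure, adding only what
# the new study context requires.
setClass("UtilityMapperAU",
         contains = "UtilityMapper",
         slots = c(country_chr = "character"))

setGeneric("predict_utility", function(x) standardGeneric("predict_utility"))

# Only the algorithm that differs in the new context needs to be overridden.
setMethod("predict_utility", "UtilityMapperAU", function(x) {
  sum(x@coefs_dbl)  # placeholder for a context-specific scoring algorithm
})
```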

When?

In most cases, we recommend waiting until production releases of relevant module libraries are available. However, we are currently planning or actively undertaking some initial study analysis transfers using the development versions of our utility mapping and choice modelling module libraries. We are undertaking this work in parallel with testing and, where necessary, extending the required modules. We suggest that, should you believe that any of our development version software is potentially relevant to a study you wish to undertake, you first get in touch with our project lead to discuss the pros / cons and timing of using this software.

4 - Priority 4: Grow a user community

We want to develop a community of readyforwhatsnext users, contributors and stakeholders to sustain the development, maintenance, application, extension and impact of the project.

Why?

readyforwhatsnext is open source because we believe that transparent and collaborative approaches to model development are more likely to produce transparent, reusable and updatable models. No one modelling team has the resources or breadth of expertise and diversity of values to adequately address all of the important decision topics in youth mental health systems design and policy. Opportunities for modellers to test, re-use, update and combine each other’s work help make modelling projects more valid and tractable. Models have become increasingly complex, so simply publishing model code and data may have limited impact on improving model transparency. These artefacts also need to be understood and tested. Clear documentation and frequent re-use in different contexts by multiple types of stakeholder make it more likely that errors and limitations can be exposed and remedied. Decentralising ownership of a model to an active community can help sustain the maintenance and extension of a model over the long term and mitigate risks and bottlenecks associated with dependency on a small number of team members.

What?

Our aim is to enhance the resilience, quality, legitimacy and impact of readyforwhatsnext by developing a community of users and contributors. The most important initial types of help we need with achieving this goal are funding, community support and advice.

How?

The process of developing the readyforwhatsnext community involves the following tasks:

  1. Creating and recruiting to volunteer advisory structures to elicit guidance on strategic, technical and conceptual topics.

  2. Making existing framework authoring tools easier for third parties to use.

  3. Developing improved documentation and collateral (e.g. video tutorials) for readyforwhatsnext model modules and datasets.

  4. Configuring hosting repositories to implement clear collaborative development workflows.

  5. Promoting readyforwhatsnext to potential users and stakeholders.

  6. Continually expanding, diversifying and updating the authorship and maintenance responsibilities of all readyforwhatsnext software.

When?

The speed at which we undertake activities to grow a user community depends on our success at securing funding to provide required support infrastructure.

5 - Priority 5: Extend the scope of the readyforwhatsnext model

We want to progressively extend the capability of readyforwhatsnext to explore new economic topics in youth mental health.

Why?

We hope that, once launched, the readyforwhatsnext MVP systems model will be transparent, reusable, updatable and useful for addressing some important topics in youth mental health. However, there will inevitably be a much greater number of topics that the MVP model lacks the scope to adequately address. The two main scope limitations of the MVP model are expected to be omissions and level of abstraction. Some relevant system features will be omitted from representation in the MVP model - for example, our pipeline of platforms modules does not currently include any planned modules for modelling the operations of digital mental health services or schools. System features that are represented in the MVP model may only have one level of abstraction, which may be either too simple or too complex to be appropriately applied to some modelling goals.

What?

We plan to progressively extend the scope of readyforwhatsnext and the range of decision topics to which it can validly be applied. The most important initial types of help we need to achieve this goal are funding, projects and advice.

How?

The two main strategies for extending readyforwhatsnext are to translate existing models and develop new models. The process for developing new models is outlined elsewhere as the steps required to undertake a modelling project.

Translating existing models involves the following steps:

  1. Identify existing computational model(s) of relevant youth mental health systems to be redeveloped using the ready4 framework. Processes for identifying models could include:
  • A modelling team reviewing some of the models that they have previously implemented using other software; and/or

  • A systematic search of published literature and/or model repositories.

  2. (Optional - only if a single project plans to redevelop multiple models) Develop a data extraction tool into which data on relevant model features will be collated and categorised.

  3. Extract data on relevant model features. In the (highly likely) event that the reporting and documentation of the model being redeveloped lacks important details:

  • Contact the original model authors for assistance; and/or

  • Seek relevant advice to help determine plausible / appropriate values for missing data.

  4. Author module libraries for representing the included model(s).

  5. Author labelled open access datasets of model input data (which can be set to private for now).

  6. Author analysis and reporting programs designed to replicate the original modelling study / studies.

  7. Compare results from the original and replication analyses and ascertain the most plausible explanations for any divergence between results (a minimal comparison sketch follows this list). Where an explanation relates to an error or limitation in the new readyforwhatsnext modules or analysis programs, fix these issues.

  8. Complete documentation of model libraries, datasets and analyses.

  9. (If not already done) Publish / link to datasets on the ready4 Dataverse and share releases of libraries and programs in the ready4 Zenodo community.
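
Step 7 can be implemented as an automated check, for example with the testthat package as sketched below; the object names, values and tolerance are illustrative.

```r
# A minimal sketch of step 7 - comparing original and replication results -
# using testthat; the names, values and tolerance are illustrative.
library(testthat)

original_icer_dbl <- 13450.2    # result reported by the original study
replication_icer_dbl <- 13449.8 # result produced by the new analysis program

# Small divergences (e.g. from random seeds or floating point precision) can
# be tolerated; larger ones should trigger investigation of the new modules
# and analysis programs.
expect_equal(replication_icer_dbl, original_icer_dbl, tolerance = 1e-3)
```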

When?

As our current focus is on developing the MVP model, we are not yet actively pursuing this priority. That will change if we are successful in securing more support from funders. In the meantime, if you are a researcher and/or modeller who is interested in leading a project that can help extend readyforwhatsnext, you can contact our project lead for guidance and/or to discuss the potential for collaborations.

6 - Priority 6: Integrate readyforwhatsnext software with other open source tools

We want coders and modellers working in languages such as Python to be able to readily use and contribute to readyforwhatsnext.

Why?

Currently, all readyforwhatsnext model software is developed using the R language. Although R is powerful, popular and flexible, there are limitations to relying on this toolkit alone. For some tasks, tools written in other languages provide superior performance. Requiring coders to have knowledge of R also erects barriers to participation that limit the rate and quality of ready4’s development.

What?

We aim to support and integrate the development and use of tools that implement and extend readyforwhatsnext in multiple languages, with an initial focus on Python. The most important initial types of help we need with achieving this goal are advice, funding and code contributions.
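
One candidate bridging mechanism, sketched below on the assumption that it proves suitable, is the reticulate package, which embeds a Python session within R.

```r
# A minimal sketch of R / Python interoperability via the reticulate package;
# whether this is the right mechanism for readyforwhatsnext is an open
# question for the planned advisory group.
library(reticulate)

np <- import("numpy")  # use a Python library from R
np$round(np$pi, 2L)    # returns 3.14

# Define a Python function and call it from R.
py_run_string("def greet(name): return 'Hello ' + name")
py$greet("modeller")
```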

How?

This is a longer term program of activity that has yet to be planned. We expect the first step in this process will be convening an advisory group of interested stakeholders to help us identify appropriate actions.

When?

We have no active plans to progress this during our current 2023-2025 activity cycle. However, we are open to providing whatever support and guidance we can to researchers and organisations who are interested in leading a project of this nature.