Updating our dataset page to better meet user needs

Here at Toronto Open Data, we strive to treat the City’s Open Data Portal like a product, not a project; we’re always learning things from our users – what they need, what they’d like, what their pain points are — and continuously improving the portal based on their feedback. 

Something we learned recently was that our dataset page – the landing page for each individual dataset on the portal – wasn’t meeting users’ expectations. To help people get the most out of open data, each dataset page includes helpful information like a definition of the fields, a data quality score and a preview of the actual dataset.

The dataset page’s design, however, made it difficult to find this information. Take our Central Intake Calls open dataset, for example. The dataset includes information about Toronto’s 24/7 telephone referral service for emergency shelters and overnight accommodation. It consists of two sub-datasets, or “resources”: one for historic call volumes and one for call types. Under the previous design, the field definitions, data quality score, and preview function were only available for the first resource.
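This dataset-and-resources structure mirrors the CKAN data model that many open data portals, including ours, are built on: each dataset is a “package” holding a list of downloadable resources. As a minimal sketch, here’s how you might walk that structure in Python; the JSON below is shaped like CKAN’s `package_show` output, but its titles and values are made up for illustration, not real API data:

```python
import json

# A made-up response shaped like CKAN's package_show action output;
# the resource names and values are illustrative, not real API data.
sample_response = json.loads("""
{
  "success": true,
  "result": {
    "title": "Central Intake Calls",
    "resources": [
      {"name": "Call volumes (historic)", "format": "CSV"},
      {"name": "Call types", "format": "CSV"}
    ]
  }
}
""")

def list_resources(package):
    """Return (name, format) for every resource in a CKAN-style package."""
    return [(r["name"], r["format"]) for r in package["result"]["resources"]]

for name, fmt in list_resources(sample_response):
    print(f"{name} ({fmt})")
```

A fix like the one described in this post amounts to rendering this full list of resources, with its supporting details, rather than only the first entry.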

We heard from users that this was a pain point, so we set out to fix it.  

Step 1: What do other cities do? 

We started where many government projects start: a jurisdictional scan. We looked at other cities’ portals to learn how they’ve tackled the same problem.  

We reviewed 13 other open data portals, and found all were using similar UI patterns, including: 

  • Tabs, where you could toggle between different open data resources by clicking a button. 
  • Accordions, where you could reveal or hide additional information about each resource. 
  • Hybrid, some combination of tabs and accordions.
Screenshot from https://opendata.vancouver.ca/, where they’ve used tabs to split up their content

Step 2: What do our users do? 

We also reviewed our internal data about how users actually use the portal. A quick glance at our stats showed users are most likely to click on the links to download data or view column definitions. Regardless of the solution we chose, we knew these features should remain readily accessible to portal visitors.  
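As a sketch of that kind of review: tallying click events per feature and ranking them is enough to surface which features matter most. The event names and counts below are invented for illustration, not our actual analytics:

```python
from collections import Counter

# Hypothetical click-event log -- the event names and volumes here are
# invented to illustrate the analysis, not real portal analytics.
events = (
    ["download_data"] * 220
    + ["view_column_definitions"] * 140
    + ["preview_data"] * 60
    + ["view_data_quality_score"] * 35
)

clicks = Counter(events)

# Rank features by how often users click them.
for feature, count in clicks.most_common():
    print(f"{feature}: {count}")
```

Whatever the exact numbers, a ranking like this tells you which features must stay one click away in any redesign.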

We also conducted a survey to help validate our assumptions at scale. We shared the questions with tech and data meetup groups in Toronto, and invited some of our academic partners (many of our users are students!) to complete the survey with their classes.  

We received 53 responses. Most identified as students, researchers, or hobbyists; only a minority of respondents identified as developers or professionals.    

The survey really helped validate some of our assumptions. For example, respondents said they were most likely to use the portal to download data, and many pointed to the lack of functionality on the dataset page as a pain point.  

Step 3: What would users prefer? 

Rather than reinvent the wheel, we made working versions of our dataset page with tabs, accordions, and a mix of both, and put them in front of users.  

We visited local meetups, including Civic Tech Toronto and Machine Learning Toronto, and conducted 16 user interviews. We showed people different versions of the dataset page and asked them to perform common tasks, like downloading a dataset or finding the definition for a specific column.  

These usability tests almost unanimously showed that a hybrid approach was easiest for users to navigate.  

Testers appreciated the prominent download button on the accordion for each resource. They also noted the hybrid approach reduced both the overall visual clutter of the page and the need to scroll to find what they were looking for. 

Step 4: Implement!  

Thanks to our research, we felt confident that a hybrid approach would create the best user experience for consumers of open data. So that’s what we did.  

In July of 2025, we launched a revamped version of the dataset page. The new page is cleaner, requires less scrolling, and most importantly, lets users view additional information about every resource in a dataset.  

Check out the new layout, and let us know if there are other ways we could improve it! 

A dataset page on https://open.toronto.ca/ using a new “hybrid” layout

What else did we learn (and what can we do better)? 

User research is fantastic. It takes the guesswork out of product development and helps us make decisions about the Open Data Portal (and the program writ large) with confidence.  

We want to do more research. With more – and more diverse – people. And we want to do it better.  

But those are high level learnings. More specifically, we learned a few things about the practice of user research: 

  1. Usability tests are great for deciding on UI changes, but not so great for unearthing new pain points or opportunities; to do that, we need to conduct more open-ended conversations and really watch people interact with the portal.  
  2. Surveys don’t necessarily lead to deeper or more valuable insights about user experience, but they are useful for validating assumptions at scale. User research with just a few participants can show us what direction to take, but surveys give us confidence that a greater number of users feel the same.  

Help us improve the Open Data Portal 

We’re committed to doing more research and using it to inform the product roadmap for Toronto’s Open Data Portal. If you’re interested in helping us improve the portal’s user experience, or helping test new features, sign up to be an open data beta tester!   

Call for Feedback: Share your ideas for Toronto’s next Open Data Policy 

Hi, everyone! In our previous blog posts, we shared how (and why) the City is refreshing its open data policy and what we’ve learned so far from other cities.

We’ve taken those learnings, consulted with our colleagues and drafted an early iteration of the policy. Now it’s time to get feedback from our users and other stakeholders!  

We want to ask City staff whether the policy is easy to understand and feasible to implement. And we want to ask the public how they’re using open data, what new data they’d like to see on the portal, and how the experience of using open data could be improved. 

Here’s how you can share feedback ⬇️   

Step 1: Review our goals  

We’re making changes to the open data policy and program to achieve four key goals:

  • to encourage all City of Toronto Divisions to be proactive participants in the open data program; 
  • to ensure the City is prioritizing the release of data that the public, businesses and City staff want and are likely to use; 
  • and to ensure open data is published in accordance with the City’s legislative responsibilities and its commitments to privacy, security and equity. 

Step 2: Read the policy (this one’s optional!) 

If you’d like to read the latest policy draft, you can find it here.  

We know reading public policy isn’t everyone’s idea of a good time, so reviewing the policy is entirely optional; it isn’t required to complete our survey or share your ideas. However, reading the policy beforehand could help you refine those ideas and offer your best feedback. 

A quick disclaimer: the policy is still very much a draft. In design parlance, it’s in alpha. Nothing about the policy is fixed or final, and it doesn’t represent an official policy statement or commitment from the City. It’s likely to change based on the feedback you share with us!

Step 3: Complete the survey 

We’ve designed a short survey to collect your feedback on the City’s open data program and policy.  

There is one survey for the public, and a different survey for City staff. The deadline to fill out a survey is February 17, 2025.  

If you’re a real Open Data nerd (like us) and have more to say beyond the survey, don’t worry! We’re running a series of workshops to dive deeper into the policy this winter. You can sign up to participate at the end of the survey. We can’t promise we’ll be able to include everyone who registers, but we’ll do our best.  

Step 4: We’ll be in touch 

Once we’ve had a chance to take in and analyze your feedback, we’ll share back what we’ve learned. We’ll do that right here on our blog, so stay tuned! 

From everyone at Toronto Open Data: thank you for your participation and insight!

A jurisdictional scan of open data policies

A pair of binoculars resting on a desk.

In our first blog post, we talked about how we’re reviewing other cities’ open data policies and chatting with colleagues in those cities about their programs. In nerdy government parlance, we’re doing a jurisdictional scan.  

We’d like to share what we’ve learned, what we like about other cities’ policies, and how their work informs our work.  

Edmonton 

Our colleagues in Alberta’s capital are leaders in municipal open data. They were the first city in North America to sign the International Open Data Charter and they’ve won numerous awards from the Canadian Open Data Society.  

Edmonton’s policy suite consists of a broader Open City Policy from 2015, and a specific Open Data Strategy from 2017.  

Their policy includes an Open Data Advisory group “with representatives from … privacy advisors, legal advisors, and data stewards.” Meeting requirements for both transparency and data privacy can be challenging, and concerns about publishing sensitive information can be a blocker to opening data. Having dedicated experts on hand to assess whether data is safe to publish can go a long way towards assuaging those concerns.  

Edmonton’s strategy also includes a commitment to “co-create data with interested users through crowdsourcing,” which is something we’re hoping to enable through our policy refresh. 

New York City 

In New York, open data isn’t just a policy, it’s the law. Local Law 11 of 2012 requires municipal agencies (in Toronto, we use the term “divisions”) to submit annual open data plans, including “a summary … of public data sets under the control of each agency.” Agencies must prioritize those datasets for inclusion on the open data portal, set timelines for publication and report on their compliance.   

Even with the force of law behind them, New York’s open data team continues to invest significant resources into building relationships with data owners and supporting them throughout the publication process.  

One way they do that is through the Open Data Coordinators community. The bylaw requires agencies to appoint data-savvy and “well-networked” staff to coordinate open data efforts, and the team provides templates and trainings to help them. Communities of practice can be force multipliers for policy, and we’ll look to emulate NYC’s approach.   

We’re also fans of prioritizing data for publication. Cities collect and create A LOT of data and we ought to focus on releasing the data that creates the most impact. NYC agencies must rank data based on whether it: 

  • can be used to increase accountability and responsiveness;  
  • improves public knowledge of the agency and its operations;  
  • furthers the mission of the agency;  
  • creates economic opportunity;   
  • responds to a need or demand identified by public consultation. 

Agencies also “must consider public feedback when prioritizing which datasets to release.” That really aligns with our goal of making Toronto’s open data program more user-centred.  

Montréal 

Our neighbours to the east (or nos voisins de l’Est) have one of the most contemporary open data policy suites of any city in our scan. Montreal’s Open Data Policy, Digital Data Charter and Data Governance Directive have all been written or updated since 2020. 

Montreal’s policy is similar to others on this list, but a few provisions stood out to us. 

First, Montreal grants the City Manager “the ultimate authority to decide on the degree of openness of data held in the city’s trust,” meaning they can require data to be opened even if divisions are reluctant to publish it.  

Second, their policy “commits to implementing automation mechanisms to ensure that data is updated at regular intervals.” Creating automated pipelines as part of the open data process is one of the best ways to improve the quality and currency of our data.  
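A pipeline doesn’t have to be elaborate to be useful; even a scheduled freshness check helps keep data current. Here’s a sketch of one, with invented dataset records and refresh intervals (none of these reflect real portal metadata):

```python
from datetime import date, timedelta

# Invented records: (dataset name, last refreshed, promised refresh interval).
datasets = [
    ("central-intake-calls", date(2025, 7, 1), timedelta(days=7)),
    ("park-locations", date(2025, 1, 15), timedelta(days=365)),
    ("daily-shelter-occupancy", date(2025, 5, 1), timedelta(days=1)),
]

def stale(last_refreshed, interval, today):
    """A dataset is stale when its promised refresh window has elapsed."""
    return today - last_refreshed > interval

# Flag anything overdue; a real pipeline might open a ticket or re-run a job.
today = date(2025, 7, 10)
for name, last, interval in datasets:
    if stale(last, interval, today):
        print(f"{name} is overdue for a refresh")
```

Automating even this small check turns data currency from a promise into something measurable.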

Lastly, Montreal “commits to publishing an inventory of data held in its trust, regardless of their degree of openness.” A data inventory is a powerful tool – it can help the public see what data is available and help prioritize data for publication – but building one can be challenging. Given the pace, volume and complexity of data creation in government, the idea of an inventory – a perfect snapshot of ALL the City’s data – can feel daunting, if not impossible.  

Montreal does something awesome here: they acknowledge their inventory is “constantly evolving” and may be incomplete. Rather than a perfect list, they reframe the inventory as an ongoing dialogue, something to iteratively grow and improve over time. 

This approach really resonates, and we applaud Montreal for not letting perfect be the enemy of good.  

Hamilton 

Like Montréal, Hamilton grants a single executive the ability to make decisions about data; their Chief Digital and Data Officer (a role that doesn’t exist here in Toronto), has “the authority to make the final decision on the posting of a Dataset.” Their policy also includes an “Open Data Evaluation Group,” whose job is to “review open dataset submissions.” 

What really stands out about Hamilton’s Open Data Policy is how it was created. The city ran an innovative and transparent process to garner public feedback, using the Engage Hamilton platform. They posted drafts of the policy for public input and maintained a change log of updates. Even though the policy was finalized last year, you can still see all the comments they received.

Hamilton really lived out their principles while developing their policy and we intend to do the same.  

San Francisco 

Despite being one of the older policies in our scan, San Francisco’s has real teeth. Departments must publish a robust inventory of data under their control and a catalogue of data that could be made public, including “both raw data sets and application programming interfaces (API’s)” and “data contained in already-operating information technology systems.” They must then make “reasonable efforts” to open all that data, provided it conforms with technical standards and privacy laws. 

San Francisco also maintains an open data coordinator community and their supporting materials are excellent! They’re hosted on the equally excellent Gitbook tool, which makes them both open and easily adapted by others. 

We really like their guide to prioritizing data. It’s an elegant matrix that compares how sensitive a given dataset is with how in demand it is. Data that’s low risk and high demand jumps to the top of the publishing queue as a result!  

San Francisco’s open data prioritization matrix uses two axes, demand and classification (whether data is classified as public, sensitive or protected), to decide which datasets are the highest priority for publishing. [Source link]
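To make the idea concrete, here’s a sketch of that kind of matrix expressed as a scoring function. The numeric weights and example datasets are our own invention for illustration; San Francisco’s actual guide is a qualitative matrix, not a numeric formula:

```python
# Illustrative San Francisco-style prioritization: cross public demand with
# data classification. The weights and example datasets are invented.
DEMAND = {"low": 1, "medium": 2, "high": 3}
CLASSIFICATION = {"protected": 0, "sensitive": 1, "public": 3}

def priority(demand, classification):
    """Higher score = publish sooner; protected data always scores zero."""
    return DEMAND[demand] * CLASSIFICATION[classification]

candidates = [
    ("bike-lane-locations", "high", "public"),
    ("service-requests", "medium", "sensitive"),
    ("internal-hr-records", "high", "protected"),
]

# Low-risk, high-demand data jumps to the top of the publishing queue.
queue = sorted(candidates, key=lambda d: priority(d[1], d[2]), reverse=True)
for name, demand, classification in queue:
    print(name, priority(demand, classification))
```

The multiplication is the key design choice: it means protected data can never outrank public data no matter how in-demand it is.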

Philadelphia 

Philly’s Open Data policy was established by executive order in 2012. A year later, the City was being hailed as an open government leader in the U.S. They also created one of the first online guides to help municipal staff identify and publish data – a practice that is now commonplace in cities like New York and San Francisco (shoutout to Mark Headd, Philly’s first Chief Data Officer, for pointing the way!).  

Because it started with an executive order, Philadelphia’s policy is very clear on timelines and targets. Departments are tasked with identifying “high impact” datasets and getting “at least three” of them published in “120 days.” Releasing a small number of priority datasets feels like a smart approach and acknowledges the resource constraints data teams face in government. It’s alright to not release ALL the data TOMORROW, but let’s meaningfully commit to releasing data with the highest potential for impact.

Chicago 

Did you know that Chicago is officially a sister city of Toronto? In addition to our similar populations and lakefront statuses, we also share a commitment to open data. 😊 

Like Philly, Chicago’s policy was established by executive order in 2012, and the two documents have a lot in common. Like other cities in our scan, Chicago publishes an annual open data compliance report – which is something Toronto’s Council has asked for.

Chicago’s policy includes a directive to add “contract provisions to promote open data policies in technology-related procurements.” We’re not sure exactly how this works in practice, but users of Toronto’s open data portal have expressed interest in more information about how the City builds and buys technology, so it’s something we’ll explore. 

We’re making a new Open Data Policy for Toronto!  

Hi! We’re Toronto’s Open Data team; we’re Denis, Mackenzie, Reham, Reza, Mohammad, Yanan, Luke, Adam and Swati. We help Torontonians use and learn about City data.  

We’re the team behind Toronto’s Open Data Portal; every day, we work with colleagues across the City to help make data available.  

We’re also responsible for the City’s Open Data Policy, the rules and ideas that govern how the City opens its data. If you think of the Open Data Portal as the branches of a tree, then the policy is the roots.  

First introduced in 2011, the policy has enabled the Open Data program to grow and flourish for over a decade. We have nearly 500 datasets on the portal, representing data from 43 of the City’s 44 divisions. And with over 10,000 monthly visitors, we’re one of Canada’s most active municipal data portals. 

But a tree can only grow as big as its roots allow. To continue growing the quality and quantity of data on the portal, and to ensure open data is providing the most value to users – from staff to Councillors, to community advocates and businesses – we need to nourish our roots.  

That’s why we’re updating Toronto’s Open Data Policy! 

Why do we need a new policy? 

The original policy was created when the open data movement was still in its infancy, and as a result, it’s a bit light on specifics. There’s not much detail on roles and responsibilities, or about what “good” looks like when it comes to divisions and their data. 

We’ve learned a lot since the policy was introduced about what makes open data successful and how to foster a culture of openness in organizations. It’s time to codify those learnings into policy.  

Our current processes for publishing open data are working well, but they’re ad hoc. Our team will hear about a dataset and collaborate with the data owners to get it ready for publication, or vice versa.  

That approach has helped us get to nearly 500 published datasets, but there’s a growing backlog of requests. We’d love to open ALL the data, but we need to prioritize our efforts (check out how San Francisco does this). By creating a policy that connects data with user needs, we can focus on publishing the most impactful data. Data that’s in high demand. Data that can enable staff or the public to do innovative things. Or data about the issues that are top of mind for Torontonians.

Oh, and last, but certainly not least, City Council has asked us to update the policy.  

How are we making the policy? 

Our approach to policy development is informed by some key principles.  

First, we want to work in the open. It’s the open data policy after all, and we want to be as transparent as possible. We’ll share updates about the policy on our public-facing blog, and we’ll post draft iterations and change logs for the policy as we work on it.  

Traditional methods like steering committees and stakeholder groups (which we’re doing too!) rely on assumptions about who is interested in or impacted by a policy. By opening our work up, we create avenues to get feedback from other valuable – if unexpected – places.  

We’re also committed to co-designing the policy with its users: staff on the ground who will put the policy into practice. When policy development is siloed from implementation, we risk creating policies that can’t — or won’t — be adopted. We want our policy to be an enabler for teams working with data, so we’re involving them in the process and asking them to prototype the policy with us.  

Lastly, we don’t want to reinvent the wheel. We’ve learned a lot since the Open Data program launched in 2009, but so have our colleagues in cities like Edmonton, Hamilton (which led an innovative public consultation process to develop their open data policy), Montreal, San Francisco (which has outstanding support materials for staff) or New York (where open data isn’t just a policy, it’s the law).  

We’re looking at these cities’ policies and chatting with our contacts about how those policies were rolled out. Just like with open source software, we’re going to take the best parts of their policies and adapt them to Toronto.  

How can you get involved? 

If you’re curious where we’re headed, check out the Sunlight Foundation’s guidelines for open data policies. Sunlight’s core principles are included in our current policy, and we’re inspired by their work. 

Over the coming months, we’ll be doing lots of consultations:  

  • We want to learn from staff about how we can make the open data publishing process easier, or how open data can be a lever to create other data products.  
  • We want to connect with business intelligence teams at the City to learn more about divisional data assets.  
  • We want to connect with those leading other data policies and frameworks at the City, so we can work in parallel.  
  • We want to connect with external users – journalists, academics, entrepreneurs, community advocates or civic technologists – to understand what data they’d like to see on the portal (or even what data they may be able to contribute!).  

We’ll post about all these engagements on our blog, so stay tuned for opportunities to contribute. 

In the interim, our inbox is always open! Reach out to opendata@toronto.ca if you have ideas or questions.  

We look forward to co-designing Toronto’s next Open Data Policy with you!