This section includes release notes for supported W&B Server releases. For releases that are no longer supported, refer to Archived releases.
1.1 - Release policies and processes
Release process for W&B Server
This page gives details about W&B Server releases and W&B’s release policies. This page relates to W&B Dedicated Cloud and Self-Managed deployments. To learn more about an individual W&B Server release, refer to W&B release notes.
W&B supports a major W&B Server release for 6 months from its initial release date.
Dedicated Cloud instances are automatically updated to maintain support.
Customers with Self-managed instances are responsible for upgrading in time to maintain support. Avoid staying on an unsupported version.
W&B strongly recommends that customers with Self-managed instances update their deployments with the latest release at least once per quarter, to maintain support and receive the latest features, performance improvements, and fixes.
Release types and frequencies
Major releases are produced monthly, and may include new features, enhancements, performance improvements, medium and low severity bug fixes, and deprecations. An example of a major release is 0.68.0.
Patch releases within a major version are produced as needed, and include critical and high severity bug fixes. An example of a patch release is 0.67.1.
Release rollout
After testing and validation are complete, a release is first rolled out to all Dedicated Cloud instances to keep them fully updated.
After additional observation, the release is published. Self-managed deployments can upgrade to it on their own schedule, and are responsible for upgrading in time to comply with the Release support and End of Life (EOL) policy. Learn more about upgrading W&B Server.
Downtime during upgrades
When a Dedicated Cloud instance is upgraded, downtime is generally not expected, but may occur in certain situations:
If a new feature or enhancement requires changes to the underlying infrastructure, such as compute, storage or network.
To roll out a critical infrastructure change such as a security fix.
If the instance’s current version has reached its End of Life (EOL) and is upgraded by W&B to maintain support.
For Self-managed deployments, the customer is responsible for implementing a rolling update process that meets their service level objectives (SLOs), such as by running W&B Server on Kubernetes.
Feature availability
After installing or upgrading, certain features may not be immediately available.
Enterprise features
An Enterprise license includes support for important security capabilities and other enterprise-friendly functionality. Some advanced features require an Enterprise license.
Dedicated Cloud includes an Enterprise license and no action is required.
On Self-managed deployments, features that require an Enterprise license are not available until it is set. To learn more or obtain an Enterprise license, refer to Obtain your W&B Server license.
Private preview and opt-in features
Most features are available immediately after installing or upgrading W&B Server. The W&B team must enable certain features before you can use them in your instance.
Any feature in a preview phase is subject to change. A preview feature is not guaranteed to become generally available.
Private preview: W&B invites design partners and early adopters to test these features and provide feedback. Private preview features are not recommended for production environments.
The W&B team must enable a private preview feature for your instance before you can use it. Public documentation is not available; instructions are provided directly. Interfaces and APIs may change, and the feature may not be fully implemented.
Public preview: Contact W&B to opt in to a public preview to try it out before it is generally available.
The W&B team must enable a public preview feature before you can use it in your instance. Documentation may not be complete, interfaces and APIs may change, and the feature may not be fully implemented.
To learn more about an individual W&B Server release, including any limitations, refer to W&B Release notes.
1.2 - 0.68.0
April 29, 2025
W&B Server v0.68 includes enhancements to various types of panels and visualizations, security improvements for Registry, Weave, and service accounts, performance improvements when forking and rewinding runs, and more.
Registry admins can define and assign protected aliases to represent key stages of your development pipeline. A protected alias can be assigned only by a registry admin. W&B blocks other users from adding or removing protected aliases from versions in a registry using the API or UI.
You can now filter console logs based on a run’s x_label value. During distributed training, this optional parameter tracks the node that logged the run.
You can now move runs between Groups, one by one or in bulk. Also, you can now create new Groups after the initial logging time.
Line plots now support synchronized zooming mode, where zooming to a given range on one plot automatically zooms into the same range on all other line plots with a common x-axis. Turn this on in the workspace display settings for line plots.
Line plots now support formatting custom metrics as timestamps. This is useful when synchronizing or uploading runs from a different system.
You can now slide through media panels using non-_step fields, such as epoch, train/global_step, or any other custom metric.
In Tables and plots in Query Panels that use runs or runs.history expressions, a step slider lets you step through your metrics, text, or media over the course of your runs. The slider supports stepping through non-_step metrics.
You can now customize bar chart labels using a font size control.
Private preview
Private preview features are available by invitation only. To request enrollment in a private preview, contact support or your AISE.
Personal workspace templates allow you to save your workspace setup so it is automatically applied to your new projects. Initially, you can configure certain line plot settings such as the default X axis metric, smoothing algorithm, and smoothing factor.
Improved Exponentially-weighted Moving Average (EMA) smoothing provides more reliable smoothed lines when operating on complete, unbinned data. In most cases, smoothing is handled at the back end for improved performance.
Weave
Chat with fine-tuned models from within your W&B instance. Playground is now supported in Dedicated Cloud. Playground is a chat interface for comparing different LLMs on historical traces. Admins can add API keys to different model providers or hook up custom hosted LLM providers so your team can interact with them from within Weave.
OpenTelemetry support: you can now log traces via OpenTelemetry (OTel). Learn more.
Weave tracing has new framework integrations: CrewAI, OpenAI’s Agent SDK, DSPy 2.x and Google’s genai Python SDK.
Playground supports new OpenAI models: GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano.
Build labeled datasets directly from traces, with your annotations automatically converted into dataset columns. Learn more.
Security
Registry admins can now designate a service account in a registry as either a Registry Admin or a Member. Previously, the service account’s role was always Registry Admin. Learn more.
Performance
Improved the performance of many workspace interactions, particularly in large workspaces. For example, expanding sections and using the run selector are significantly more responsive.
Improved Fork and Rewind Performance.
Forking a run creates a new run that uses the same configuration as an existing run. Changes to the forked run do not affect the parent run, and vice versa; a pointer is maintained between the forked run and its parent. Rewinding a run lets you log new data from a given point in time without losing the existing data.
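As a sketch, forking and rewinding both identify a source run and a step with a single spec string. The `fork_spec` helper below is hypothetical; only the `"{run_id}?_step={step}"` format comes from the W&B SDK's `fork_from` and `resume_from` arguments.

```python
# Hypothetical helper for building the fork/rewind spec string. Only the
# "{run_id}?_step={step}" format is taken from the W&B SDK's fork_from /
# resume_from arguments; everything else here is illustrative.

def fork_spec(run_id: str, step: int) -> str:
    """Identify the source run and the step to fork or rewind from."""
    return f"{run_id}?_step={step}"

# Usage sketch (not executed here; requires the wandb SDK and a W&B instance):
#   import wandb
#   forked = wandb.init(project="my-project",
#                       fork_from=fork_spec("abc123", 200))
#   rewound = wandb.init(project="my-project",
#                        resume_from=fork_spec("abc123", 200))

print(fork_spec("abc123", 200))  # → abc123?_step=200
```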
In projects with many nested forks, forking new runs is now much more efficient due to improvements in caching.
Fixes
Fixed a bug that could prevent an organization service account from being added to new teams.
Fixed a bug that could cause hover marks to be missing for grouped lines.
Fixed a bug that could include invalid project names in the Import dropdown of a Report panel.
Fixed a display bug in the alignment of filters in the run selector.
Fixed a page crash when adding a timestamp Within Last filter.
Fixed a bug that could prevent the X-axis from being set to Wall Time in global line plot settings.
Fixed a bug that could prevent image captions from appearing when they are logged to a Table.
Fixed a bug that could prevent sparse metrics from showing up in panels.
In Run Overview pages, the Description field is now named Notes.
1.3 - 0.67.0
March 28, 2025
Features
In Reports, you can now give a run a custom display name per panel grid. This allows you to replace the run’s (often long and opaque) training-time name with one that is more meaningful to your audience. The report updates the name in all panel grids, helping you to explain your hard-won experimental insights to your colleagues in a concise and readable way. The original run name remains intact in the project, so doing this won’t disrupt your collaborators.
When you expand a panel in the workspace, it now opens in full screen mode with more space. In this view, line plots render with more granular detail, using up to 10,000 bins. The run selector appears next to the panel, letting you easily toggle, group, or filter runs in context.
From any panel, you can now copy a unique URL that links directly to that panel’s full screen view. This makes it even easier to share a link to dig into interesting or pathological patterns in your plots.
Run Comparer is a powerful tool you can use to compare the configurations and key metrics of important runs alongside their loss curves. Run Comparer has been updated:
Faster to add a Run Comparer panel, as an expanded option in Add Panels.
By default, a Run Comparer panel takes up more space, so you can see the values right away.
Improved readability and legibility of a Run Comparer panel. You can use new controls to quickly change row and column sizes so you can read long or nested values.
You can copy any value in the panel to your clipboard with a single click.
You can search keys with regular expressions to quickly find exactly the subset of metrics you want to compare across. Your search history is saved to help you iterate efficiently between views.
Run Comparer is now more reliable at scale, and handles larger workspaces more efficiently, reducing the likelihood of poor performance or a crashed panel.
Segmentation mask controls have been updated:
You can now toggle each mask type on or off in bulk, or toggle all masks or all images on or off.
You can now change each class’s assigned color, helping to avoid confusion if multiple classes use the same color.
When you open a media panel in full screen mode, you can now use the left or right arrows on your keyboard to step through the images, without first clicking on the step slider.
Media panels now color run names, matching the run selector. This makes it easier to associate a run’s media values with related metrics and plots.
In the run selector, you can now filter by whether a run has a certain media key.
You can now move runs between groups in the W&B App UI, and you can create new groups after the run is logged.
Automations can now be edited in the UI.
An automation can now notify a Slack channel for artifact events. When creating an automation, select “Slack notification” for the Action type.
Registry now supports global search by default, allowing you to search across all registries by registry name, collection name, alias, or tag.
In Tables and Query panels that use the runs expression, you can use the new Runs History step slider and drop-down controls to view a table of metrics at each step of a run.
Playground in Weave supports new models: OpenAI’s gpt-4.5-preview and Deepseek’s deepseek-chat and deepseek-reasoner.
Weave tracing has two new agent framework integrations: CrewAI and OpenAI’s Agent SDK.
In the Weave UI, you can now build Datasets from traces. Learn more: https://weave-docs.wandb.ai/guides/core-types/datasets#create-edit-and-delete-a-dataset-in-the-ui
The Weave Python SDK now provides a way to filter the inputs and outputs of your Weave data to ensure sensitive data does not leave your network perimeter. You can configure which fields to redact. Learn more: https://weave-docs.wandb.ai/guides/tracking/redact-pii/
To streamline your experience, the System tab in the individual run workspace view will be removed in an upcoming release. View full information about system metrics in the System section of the workspace. For questions, contact support@wandb.com.
Security
golang crypto has been upgraded to v0.36.0.
golang oauth2 has been upgraded to v0.28.0.
In Weave, pyarrow is now pinned to v17.0.0.
Performance
Frontend updates significantly reduce workspace reload times by storing essential data in the browser cache across visits. The update optimizes loading of saved views, metric names, the run selector, run counts, W&B’s configuration details, and the recomputation of workspace views.
Registry overview pages now load significantly faster.
Improved the performance of selecting metrics for the X, Y, or Z values in a scatter plot in a workspace with thousands of runs or hundreds of metrics.
Performance improvements to Weave evaluation logging.
Fixes
Fixed a bug in Reports where following a link to a section in the report would not open to that section.
Improved how Gaussian smoothing handles index reflection, matching SciPy’s default “reflect” mode.
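For reference, SciPy's default “reflect” mode pads a series as (d c b a | a b c d), duplicating the edge sample. A minimal pure-Python sketch of Gaussian smoothing with that boundary handling (illustrative only, not W&B's implementation):

```python
import math

def gaussian_kernel(sigma: float, radius: int) -> list:
    """Normalized 1-D Gaussian weights over [-radius, radius]."""
    raw = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def smooth_reflect(ys: list, sigma: float) -> list:
    """Gaussian smoothing with 'reflect' padding (d c b a | a b c d),
    matching scipy.ndimage's default boundary mode.
    Assumes radius < len(ys), so a single reflection suffices."""
    radius = max(1, int(3 * sigma))
    kernel = gaussian_kernel(sigma, radius)
    n = len(ys)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - radius
            if idx < 0:          # reflect about the left edge: -1 -> 0, -2 -> 1
                idx = -idx - 1
            elif idx >= n:       # reflect about the right edge: n -> n-1
                idx = 2 * n - idx - 1
            acc += w * ys[idx]
        out.append(acc)
    return out

print(smooth_reflect([0.0, 0.0, 1.0, 0.0, 0.0], 1.0))
```

With this padding, smoothing a constant series returns the constant, and a symmetric series stays symmetric after smoothing.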
A Report comment link sent via email now opens directly to the comment.
Fixed a bug that could crash a workspace if a sweep took longer than about 2 billion compute seconds, by changing the variable type for sweep compute seconds from int32 to int64.
Fixed display bugs that could occur when a report included multiple run sets.
Fixed a bug where panels Quick Added to an alphabetically sorted section were sorted incorrectly.
Fixed a bug that generated malformed user invitation links.
1.4 - 0.66.0
March 06, 2025
Features
In tables and query panels, columns you derive from other columns now persist, so you can use them for filtering or in query panel plots.
Security
Limited the maximum depth for a GraphQL document to 20.
Upgraded pyarrow to v17.0.0.
1.5 - 0.65.0
January 30, 2025
Features
From a registry’s Settings, you can now update the owner to a different user with the Admin role. Select Owner from the user’s Role menu.
You can now move a run to a different group in the same project. Hover over a run in the run list, click the three-vertical-dots menu, and choose Move to another group.
You can now configure whether the Log Scale setting for line plots is enabled by default at the level of the workspace or section.
To configure the behavior for a workspace, click the action ... menu for the workspace, click Line plots, then toggle Log scale for the X or Y axis.
To configure the behavior for a section, click the gear icon for the section, then toggle Log scale for the X or Y axis.
1.6 - 0.63.0
December 10, 2024
Features
Weave is now generally available (GA) in Dedicated Cloud on AWS. Reach out to your W&B team if your teams are looking to build Generative AI apps with confidence and put them into production.
The release includes the following additional updates:
W&B Models now integrates seamlessly with the Azure public cloud. You can now create a Dedicated Cloud instance in an Azure region directly from your Azure subscription and manage it as an Azure ISV resource. This integration is in private preview.
You can now enable automations at the registry level to monitor changes and events across all collections in the registry and trigger actions accordingly. This eliminates the need to configure separate webhooks and automations for individual collections.
You can now assign an x_label, such as node-0, in the run settings object to distinguish logs and metrics by label (for example, by node) in distributed runs. This enables grouping system metrics and console logs by label for visualization in the workspace.
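A sketch of how a distributed launcher might derive a per-node label: the RANK environment variable and the `node_label` helper are assumptions here, while x_label itself is the W&B run-settings field described above.

```python
# Sketch only: RANK and node_label are illustrative assumptions; x_label is
# the W&B settings field used to distinguish nodes in distributed runs.
import os

def node_label(env) -> str:
    """Turn a launcher's rank (e.g. RANK=0) into a label like 'node-0'."""
    return f"node-{env.get('RANK', '0')}"

# Usage sketch (not executed here; requires the wandb SDK and a W&B instance):
#   import wandb
#   run = wandb.init(
#       project="distributed-training",
#       settings=wandb.Settings(x_label=node_label(os.environ)),
#   )

print(node_label({"RANK": "3"}))  # → node-3
```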
With a patch release coming later this week, you will be able to use organization-level service accounts to automate your W&B workloads across all teams in your instance. You can still use existing team-level service accounts if you want more control over the access scope of a service account.
Org-level service accounts can now interact with Registry. Such service accounts can be invited to a registry using the invite modal and are displayed in the members table along with their respective organization roles.
Fixes
Fixed an issue where users with custom roles that include the Create Artifact permission could not log artifacts to a project.
Fixed an issue with metadata logging for files in instances that have sub-path support configured for BYOB.
Webhook deletion is now blocked if the webhook is in use by organization registry automations.
1.7 - Archived Releases
Archived releases have reached end of life and are no longer supported. A major release and its patches are supported for six months from the initial release date. Release notes for archived releases are provided for historical purposes. For supported releases, refer to Releases.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
1.7.1 - 0.61.0
October 17, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
This is a mini-feature and patch release, delivered on a different schedule than the monthly W&B Server major releases.
Organization admins can now configure Models seats and access control for both Models and Weave seamlessly from their organization dashboard. This change allows for efficient user management when Weave is enabled for a Dedicated Cloud or Self-managed instance.
Weave pricing is consumption-based rather than based on number of seats used. Seat management only applies to the Models product.
Fixed an issue where underlying database schema changes made as part of release upgrades could time out during platform startup.
Added more performance improvements to the underlying parquet store service to further improve chart loading times, and addressed a high CPU utilization issue in the service to make efficient chart loading more reliable.
The parquet store service is available only on Dedicated Cloud and on Self-managed instances based on the W&B Kubernetes operator.
1.7.2 - 0.60.0
September 26, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Final updates for Level AA compliance with the Web Content Accessibility Guidelines (WCAG) 2.2, including success criterion 1.1.1.
W&B can now disable auto-version-upgrade for customer-managed instances that use the W&B Kubernetes operator. You can request this from your W&B team.
Note that W&B requires all instances to upgrade periodically to comply with the 6-month end-of-life period for each version. W&B does not support versions older than 6 months.
Due to a release versioning issue, 0.60.0 is the next major release after 0.58.0; 0.59.0 was a patch release for 0.58.0.
Fixes
Fixed a bug to allow instance admins on Dedicated Cloud and Customer-managed instances to access workspaces in personal entities.
The SCIM Groups and Users GET endpoints now filter service accounts out of their responses; only non-service-account users are returned by those endpoints.
Fixed a user management bug by removing the ability of team admins to delete a user from the overall instance while removing them from a team. Instance or organization admins are responsible for deleting a user from the overall instance or organization.
Performance improvements
Reduced the latency when adding a panel by up to 90% in workspaces with many metrics.
Improved the reliability and performance of parquet exports to blob storage when runs are resumed often.
Run export to blob storage in parquet format is available on Dedicated Cloud and on customer-managed instances that are enabled using the W&B Kubernetes operator.
1.7.3 - 0.58.1
September 04, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
W&B now supports a sub-path for the Secure Storage Connector (bring-your-own-bucket capability). You can now provide a sub-path when configuring a bucket at the instance or team level. This is available only for new bucket configurations, not for existing configured buckets.
W&B-managed storage on newer Dedicated Cloud instances in GCP and Azure is now encrypted by default with W&B-managed cloud-native keys, as was already the case on AWS instances. Each instance's storage is encrypted with a key unique to the instance. Until now, all instances on GCP and Azure relied on default cloud provider-managed encryption keys.
Fields in the run config and summary are now copyable on click.
If you’re using the W&B Kubernetes operator for a customer-managed instance, you can now optionally use a custom CA for the controller manager.
We’ve modified the W&B kubernetes operator to run in a non-root context by default, aligning with OpenShift’s Security Context Constraints (SCCs). This change ensures smoother deployment of customer-managed instances on OpenShift by adhering to its security policies.
Fixes
Exporting panels from a workspace to a report now correctly respects the panel search regex.
Fixed an issue where setting GORILLA_DISABLE_PERSONAL_ENTITY to true did not prevent users from creating projects and writing to existing projects in their personal entities.
Performance improvements
We have significantly improved performance and stability for experiments with 100k+ logged points. If you have a customer-managed instance, this is available when the deployment is managed using the W&B Kubernetes operator.
Fixed an issue where saving changes in large workspaces could be very slow or fail.
Improved latency of opening workspace sections in large workspaces.
1.7.4 - 0.57.2
July 24, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
You can now use JWTs (JSON Web Tokens) to access your W&B instance from the wandb SDK or CLI, using the identity federation capability. The feature is in preview. Refer to Identity federation and reach out to your W&B team for any questions.
The 0.57.2 release also includes these capabilities:
Improvements to the new Add to reports drawer for exporting workspace panels into reports.
Artifact metadata filtering in the artifact project browser.
Pass artifact metadata in webhook payloads via ${artifact_metadata.KEY}.
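For instance, a webhook payload template might embed artifact metadata like this; the accuracy and dataset_hash keys are hypothetical metadata fields assumed to have been logged on the artifact, and only the ${artifact_metadata.KEY} syntax comes from the release note above:

```json
{
  "artifact": "my-model",
  "accuracy": "${artifact_metadata.accuracy}",
  "dataset_hash": "${artifact_metadata.dataset_hash}"
}
```

W&B substitutes each ${artifact_metadata.KEY} reference with the corresponding metadata value when the webhook fires.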
Added GPU memory usage panels to the RunSystemMetrics component, enhancing GPU metrics visualization for runs in the app frontend.
Mobile users now enjoy a much smoother, more intuitive Workspace experience.
If you’re using W&B Dedicated Cloud on GCP or Azure, you can now enable private connectivity for your instance, thus ensuring that all traffic from your AI workloads and optionally browser clients only transit the cloud provider private network. Refer to Private connectivity and reach out to your W&B team for any questions.
Team-level service accounts are now shown separately in a new tab in the team settings view and are no longer listed in the Members tab. Also, the API key is now hidden and can be copied only by team admins.
Dedicated Cloud is now available in GCP’s Seoul region.
Fixes
Fixed an issue where Gaussian smoothing was extremely aggressive on many plots.
Fixed an issue where pressing the Ignore Outliers in Chart Scaling button had no effect in the UI workspace.
Deactivated users can no longer be invited to an organization.
Fixed an issue where users added to an instance using the SCIM API could not onboard successfully.
Performance improvements
Significantly improved performance when editing a panel’s settings and applying the changes.
Improved the responsiveness of run visibility toggling in large workspaces.
Improved chart hovering and brushing performance on plots in large workspaces.
Reduced workspace memory usage and loading times in workspaces with many keys.
1.7.5 - 0.56.0
June 29, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
The new Full Fidelity line plot in W&B Experiments enhances the visibility of training metrics by aggregating all data along the x-axis and displaying the minimum, maximum, and average values within each bucket, allowing users to easily spot outliers and zoom into high-fidelity details without downsampling loss. Learn more in our documentation.
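Conceptually, each x-axis bucket keeps the minimum, maximum, and mean of every point that falls into it, so outliers survive aggregation. A minimal sketch of that bucketing (illustrative only, not W&B's implementation):

```python
def bucket_stats(xs, ys, n_buckets):
    """Split (x, y) points into equal-width x buckets and return
    (min, max, mean) per non-empty bucket, preserving outliers."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / n_buckets or 1  # avoid zero width for constant xs
    buckets = [[] for _ in range(n_buckets)]
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / width), n_buckets - 1)
        buckets[i].append(y)
    return [(min(b), max(b), sum(b) / len(b)) for b in buckets if b]

stats = bucket_stats(range(8), [0, 1, 2, 9, 4, 5, 6, 7], 2)
print(stats)  # the outlier 9 stays visible as the first bucket's max
```

Plotting the per-bucket minimum and maximum as a band around the mean is what lets the plot show outliers without transmitting every raw point.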
Due to a release versioning issue, 0.56.0 is the next major release after 0.54.0; 0.55.0 was a patch release for 0.54.0.
The 0.56.0 release also includes these capabilities:
You can now use cross-cloud storage buckets for team-level BYOB (secure storage connector) in Dedicated Cloud and Self-managed instances. For example, in a W&B instance on AWS, you can now configure Azure Blob Storage or Google Cloud Storage for team-level BYOB, and so on for each cross-cloud combination.
If you use the SCIM API, you will also see a couple of minor improvements:
The API now has a more pertinent error message in case of authentication failures.
Relevant endpoints now return the full name of a user in the SCIM User object if it’s available.
Fixes
Fixed an issue where deleting a search term from a runset in a report could delete the panel or crash the report, by ensuring proper handling of selected text during copy/paste operations.
Fixed a problem with indenting bulleted items in reports, caused by an upgrade of Slate and an additional check in the normalization process for elements.
Fixed an issue where text could not be selected from a panel while the report was in edit mode.
Fixed an issue where copy-pasting an entire panel grid in a report with Command-C was broken.
Fixed an issue where sharing a report with a magic link was broken when a team had the Hide this team from all non-members setting enabled.
Restricted projects are now handled properly: only explicitly invited users can access them, with permissions based on project members and team roles.
Instance admins can now write to their own named workspaces, read other personal and shared workspaces, and write to shared views in private and public projects.
Fixed an issue where a report would crash when editing filters, due to an out-of-bounds filter index caused by skipping non-individual filters while keeping the index count incremental.
Fixed an issue where unselecting a runset caused media panels in a report to crash, by ensuring that only runs in enabled runsets are returned.
Fixed an issue where the parameter importance panel crashed on initial load due to a violation-of-hooks error caused by a change in the order of hooks.
Chart data is no longer reloaded when scrolling down and then back up in small workspaces, improving performance and eliminating the feeling of slowness.
1.7.6 - 0.54.0
May 24, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
You can now configure Secure storage connector (BYOB) at team-level in Dedicated Cloud or Self-managed instances on Microsoft Azure.
Organization admins can now enforce privacy settings across all W&B teams by setting those at the organization level, from within the Settings tab in the Organization Dashboard.
W&B recommends notifying team admins and other users before making such enforcement changes.
Added an Enable direct lineage option for the artifact lineage DAG.
It’s now possible to restrict Organization or Instance Admins from self-joining or adding themselves to a W&B team, thus ensuring that only Data & AI personas have access to the projects within the teams.
W&B advises exercising caution and understanding all implications before enabling this setting. Reach out to your W&B team with any questions.
Dedicated Cloud on AWS is now also available in the Seoul (S. Korea) region.
Fixes
Fixed an issue where reports failed to load on mobile.
Fixed the link to the git diff file in the run overview.
Fixed an intermittent issue with loading the Organization Dashboard for certain users.
1.7.7 - 0.52.2
April 25, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
You can now enforce usernames and full names for users in your organization by using OIDC claims from your SSO provider. Reach out to your W&B team or support if interested.
You can now disable use of personal projects in your organization to ensure that all projects are created within W&B teams and governed using admin-enforced guidelines. Reach out to your W&B team or support if interested.
Added an option to expand all versions in a cluster of runs or artifacts in the Artifacts Lineage DAG view.
UI improvements to the Artifacts Lineage DAG: the type is now visible for each entry in a cluster.
Fixes
Added pagination to image panels in media banks, displaying up to 32 images per page with enhanced grid aesthetics and improved pagination controls, while introducing a workaround for potential offset inconsistencies.
Resolved an issue where tooltips on system charts were not displaying by enforcing the isHovered parameter, which is essential for the crosshair UI visibility.
Unset the max-width property for images within media panels, addressing unintended style constraints previously applied to all images.
Fixed broken config overrides in launch drawer.
Fixed Launch drawer’s behavior when cloning from a run.
1.7.8 - 0.51.0
March 20, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
You can now save multiple views of any workspace by clicking “Save as a new view” in the overflow menu of the workspace bar.
Learn more about how Saved views can enhance your team’s collaboration and project organization.
When you create a restricted project within a team, you can add specific members from the team. Unlike other project visibility scopes, all members of a team do not get implicit access to a restricted project.
Enhanced Run Overview page performance: now 91% faster on load, with search functionality improved by 99.9%. Also enjoy RegEx search for Config and Summary data.
New UX for Artifacts Lineage DAG introduces clustering for 5+ nodes at the same level, preview window to examine a node’s details, and a significant speedup in the graph’s loading time.
The template variable values used for a run executed by launch, for example GPU type and quantity, are now shown on the queue’s list of runs. This makes it easier to see which runs are requesting which resources.
Cloning a run with Launch now pre-selects the overrides, queue, and template variable values used by the cloned run.
Instance admins will now see a Teams tab in the organization dashboard. It can be used to join a specific team when needed, whether it’s to monitor the team activity as per organizational guidelines or to help the team when team admins are not available.
SCIM User API now returns the groups attribute as part of the GET endpoint, including the IDs of the groups (teams) a user is part of.
All Dedicated Cloud instances on GCP are now managed using the new W&B Kubernetes Operator. With that, the new Parquet Store service is also available.
Parquet store allows performant & cost efficient storage of run history data in parquet format in the blob storage. Dedicated Cloud instances on AWS & Azure are already managed using the operator and include the parquet store.
Dedicated Cloud instances on AWS have been updated to use the latest version of the relational data storage, and the compute infrastructure has been upgraded to a newer generation with better performance.
Advance notice: We urge all customers who use Webhooks with Automations to add a valid A-record for their endpoints, as we are going to disallow IP-address-based webhook URLs from the next release onwards. This protects against SSRF vulnerabilities and other related threat vectors.
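The `groups` attribute on the SCIM User GET endpoint can be consumed as in the following sketch. The field shape follows the SCIM 2.0 core schema; all IDs and names here are hypothetical.

```python
# Hypothetical SCIM GET /Users/{id} response body; the `groups`
# attribute lists the teams the user belongs to (SCIM 2.0 core schema).
user = {
    "id": "abc123",
    "userName": "jane-doe",
    "groups": [
        {"value": "team-id-1", "display": "ml-team"},
        {"value": "team-id-2", "display": "infra-team"},
    ],
}

# Collect the team IDs for downstream provisioning logic.
team_ids = [g["value"] for g in user.get("groups", [])]
print(team_ids)
```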
Fixes
Fixed issue where expressions tab was not rendering for line plots.
Use display name for sweeps when grouped by sweeps in charts and runs table.
Auto navigation to runs page when selecting job version.
1.7.9 - 0.50.2
February 26, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Feature
Add panel bank setting to auto-expand search results
Better visibility for run queue item issues
Dedicated Cloud customers on AWS can now use Privatelink to securely connect to their deployments.
The feature is in private preview and will be part of an advanced pricing tier at GA. Reach out to your W&B team if interested.
You can now automate user role assignment for organization or team scopes using the SCIM role assignment API
All Dedicated Cloud instances on AWS & Azure are now managed using the new W&B Kubernetes Operator. With that, the new Parquet Store service is also available. The service allows for performant & cost efficient storage of run history data in parquet format in the blob storage. That in turn leads to faster loading of relevant history data in charts & plots that are used to evaluate the runs.
The W&B Kubernetes Operator, and with it the Parquet Store service, are now available for use in customer-managed instances. We encourage customers that already host W&B on Kubernetes to reach out to their W&B team about using the operator, and we highly recommend others migrate to Kubernetes to receive the latest performance improvements and new services via the operator in the future. We're happy to assist with planning such a migration.
Fixes
Properly pass template variables through sweep scheduler
Scheduler polluting sweep yaml generator
Display user roles correctly on team members page when search or sort is applied
Org admins can again delete personal projects in their Dedicated Cloud or Self-managed server instance
Add validation for SCIM GET groups API for pending users
1.7.10 - 0.49.0
January 18, 2024
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Feature
Set a default TTL (time-to-live or scheduled deletion) policy for a team in the team settings page.
Restrict setting or editing of a TTL policy to admins only, or to admins and members.
Test and debug a webhook during creation, or afterward, in the team settings UI.
W&B will send a dummy payload and display the receiving server’s response.
View Automation properties in the View Details slider.
This includes a summary of the triggering event and action, action configs, creation date, and a copy-able curl command to test webhook automations.
Replace agent heartbeat with last successful run time in launch overview.
Service accounts can now use the Report API to create reports.
Use the new role management API to automate management of custom roles.
Enable Kubernetes Operator for Dedicated Cloud deployments on AWS.
Configure a non-conflicting IP address range for managed cache used in Dedicated Cloud deployments on GCP.
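The webhook test above sends a dummy payload and displays the receiving server's response. A minimal local receiver for trying this out might look like the following sketch; the port and response body are arbitrary choices, not W&B requirements.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept a webhook POST and answer 200 so the W&B test succeeds."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("received payload:", payload)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")  # this response body is shown in the W&B UI

    def log_message(self, *args):
        pass  # silence per-request logging

# To run locally:
# HTTPServer(("0.0.0.0", 8000), WebhookHandler).serve_forever()
```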
Fixes
Update the add runset button clickable area in reports
Show proper truncate grouping message
Prevent flashing of publish button in reports
Horizontal rule collapsed in report sections
Add section button hidden in certain views
Allow things like semantic versioning in the plot as string
Remove requirements for quotes when using template variables in queue config definitions
Improve Launch queue sorting order
Don’t auto-open panel sections when searching large workspaces
Change label text for grouped runs
Open/close all sections while searching
1.7.11 - 0.48.0
December 20, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Feature
All required frontend changes for launch prioritization
Refer to this blog on how you can run more important jobs than others using Launch.
Refer to the following changes for the access control and user attribution behavior of team service accounts:
When a team is configured in the training environment, a service account from that team can be used to log runs in either private or public projects within that team. The runs are attributed to a user only if the WANDB_USERNAME or WANDB_USER_EMAIL variable is configured in the environment and the user is part of that team.
When a team is not configured in the training environment and a service account is used, the runs are logged to the named project within the service account's team. They are attributed to a user only if the WANDB_USERNAME or WANDB_USER_EMAIL variable is configured in the environment and the user is part of that team.
A team service account cannot log runs in a private project in another team, but it can log runs to public projects in other teams.
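In code, the attribution described above amounts to setting one of these variables before the run starts. A minimal sketch with hypothetical values; the user must belong to the service account's team, and the variables must be set before `wandb.init()` runs:

```python
import os

# Hypothetical username; attributes service-account runs to this user.
os.environ["WANDB_USERNAME"] = "jane-doe"
# or, equivalently:
# os.environ["WANDB_USER_EMAIL"] = "jane@example.com"

# import wandb
# run = wandb.init(entity="my-team", project="my-project")
```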
Fixes
Reduce column widths for oversized runs selectors
Fix a couple of bugs related to Custom Roles preview feature
1.7.12 - 0.47.3
December 08, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Fixes
We’re releasing a couple of important fixes for the Custom Roles preview capability that launched as part of v0.47.2. If you’re interested in using that feature to create fine-grained roles and better align with the principle of least privilege, please use this latest server release and reach out to your Weights & Biases team for an updated enterprise license.
1.7.13 - 0.47.2
December 01, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Feature
Use custom roles with specific permissions to customize access control within a team
Available in preview to enterprise customers. Please reach out to your Weights & Biases account team or support for any questions.
Also:
Minor runs search improvements
Auto-resize runs search for long texts
View webhook details, including URL, secret, and creation date, directly from the automations table for webhook automations
Fixes
Grouping of runs when group value is a string that looks like a number
Janky report panel dragging behavior
Update bar chart spec to match the one on public cloud
Clean up panel padding and plot margins
Restores workspace settings beta
1.7.14 - 0.46.0
November 15, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Deployments on AWS can now use W&B Secrets with Webhooks and Automations
Secrets are stored securely in AWS Secrets Manager; use the terraform-aws-wandb module to provision one.
Update webhooks table to display more information
Better truncation of long strings to improve the usability of strings in the UI
Reduce delay for scroll to report section
Add white background to weave1 panels
Allow deep link for weave1 panels in reports
Allow weave1 panel resizing in reports
Homepage banner will now show CLI login instructions
User invites will now succeed even if the invite email can’t be sent for some reason
Add list of associated queues to agent overview page
Fixes
Copy function on panel overlay was dropping values
CSS cleanup for import modal when creating report
Fixes regression to hide legend when toggled off
Report comment highlighting
Remove all caching for view’s LoadMetadataList()
Let run search stretch
Associate launch agents with user id from X-WANDB-USERNAME header
1.7.15 - 0.45.0
October 25, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Enable artifact garbage collection by setting the environment variable GORILLA_ARTIFACT_GC_ENABLED=true and enabling cloud object versioning or soft deletion.
The terraform module terraform-azurerm-wandb now supports Azure Key Vault as a secrets store.
Deployments on Azure can now use W&B Secrets with Webhooks and Automations. Secrets are stored securely in Azure Key Vault.
Fixes
Remove invalid early exit preventing history deletion
When moving/copying runs, don’t drop key-set info
Update mutations to no longer use defunct storage plan or artifacts billing plan at all
Respect skip flag in useRemoteServer
1.7.16 - 0.44.1
October 12, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Add OpenAI proxy UI to SaaS and Server
Also:
New version v1.19.0 of our GCP Terraform module terraform-google-wandb is available
Add support for AWS Secrets Manager for Customer Secret Store, which can be enabled after the terraform module terraform-aws-wandb is updated and released
Add support for Azure Key Vault for Customer Secret Store, which can be enabled after the terraform module terraform-azurerm-wandb is updated and released
Fixes
Quality-of-life improvements in the model registry ui
int values no longer ignored when determining if a run achieved a sweep’s optimization goal
Cache runs data to improve workspace loading perf
Improve TTL rendering in collection table
Allow service accounts to be made workflow (registry) admins
Add tooltip for truncated run tags in workspaces
Fix report page scrolling
Copy y data values for chart tooltip
Query secrets for webhooks in local
Fixing broken domain zoom in panel config
Hide Customer Secret Store UI if GORILLA_CUSTOMER_SECRET_STORE_SOURCE env var not set
Chores
Bump langchain to latest
Adding WB Prompts to quickstart
Update AWS MIs to use terraform-kubernetes-wandb v1.12.0
Show correct Teams Plan tracked hours on the team settings page and hide them on the usage page
1.7.17 - 0.43.0
October 02, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Release 0.43.0 contains a number of minor bug fixes and performance improvements, including fixing the padding at the bottom of runs tables when there’s a scrollbar. Check out the other fixes below:
Fixes
Dramatically improve workspace loading perf
Fixing broken docs link in disabled add panel menu
Render childPanel without editor in report
Copying text from a panel grid when editing
Run overview crashing with ’length’ key
Padding for bottom of runs table when there’s a scrollbar
Eliminate unnecessary history key cache read
Error handling for Teams Checkout modal
Memory leak, excess filestream sending, and orphaned processes in Weave Python autotracer
1.7.18 - 0.42.0
September 14, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
W&B Artifacts now supports time-to-live (TTL) policies
Users can now gain more control over the deletion and retention of Artifacts logged with W&B, with the ability to set retention and time-to-live (TTL) policies. Determine when you want specific Artifacts to be deleted, update policies on existing Artifacts, and set TTL policies on upstream or downstream Artifacts.
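As a sketch, setting a TTL from the SDK looks like the following; the artifact name and the 90-day window are hypothetical, and the commented lines assume a live W&B run:

```python
from datetime import timedelta

# Hypothetical 90-day retention window for a model artifact.
ttl = timedelta(days=90)

# import wandb
# with wandb.init(project="demo") as run:
#     artifact = wandb.Artifact("model-weights", type="model")
#     artifact.add_file("model.pt")
#     artifact.ttl = ttl          # schedule deletion 90 days out
#     run.log_artifact(artifact)

print(ttl.days)
```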
Here are the other new features included in this release:
Use Launch drawer when creating Sweeps
Delete run queue items
Min/max aggregations nested dropdown
Allow users to connect multiple S3-compatible buckets
Add disk i/o system metrics
Use the legacy way to set permissions
Enable CustomerSecretStore
Add Kubernetes as a backend for CustomerSecretStore
Fixes
Disable storage and artifact invoices for ongoing storage calculation refactors
Panel deletion bug
Remove link-version event type from project automation slider
Remove upper case styling for artifact type names
Keep uncolored tags from changing color on render
Stale defaults stuck in Launch drawer on reopen
Trigger alias automations while creating artifact
Edge case failure in infinite loading tag filters
1.7.19 - 0.41.0
August 28, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
New Launch landing page
We’ve updated the Launch home page, so users looking to get started with Launch will have a much easier way to get set up quickly. Easily access detailed documentation, or simply follow the three Quickstart steps to create a Launch queue and agent and start launching jobs immediately.
Here are the other new features included in this release:
Add new reverse proxy to track OpenAI requests and responses
Show agent version on agent overview page
New model registry workflow removed from feature flag for all users
Fixes
Empty projects causing infinite load on storage explorer
Runs marked failed when run queue items are failed
Use correct bucket for storing OpenAI proxy artifacts
SEO tags not properly rendered by host
Trigger export in background, on context deadline as well
Transition runs in pending state to running when run is initialized
Query so Launch queues show most recent completed and failed jobs
1.7.20 - 0.40.0
August 18, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Webhooks
Enable a seamless model CI/CD workflow using Webhook Automations to trigger specific actions within the CI/CD pipeline when certain events occur. Use webhooks to facilitate a clean hand-off point between ML engineering and devops. To see this in practice for Model Evaluation and Model Deployment, check out the linked demo videos. Learn more in our docs.
New user activity dashboard enabled for all customers
Fixes
Removed limit on number of registered models an organization could have.
Added search history to workspaces to make it easier to find commonly used plots.
Changed reports “like” icon from hearts to stars.
Users can now change the selected run in a workspace view with a group of runs.
Fixed issue causing duplicate panel grids.
Users can now pass in per-job resource config overrides for Sweeps on Launch
Added redirect from /admin/users to new organization dashboard.
Fixed issues with LDAP dropping connections.
Improvements to run permadeletion.
1.7.21 - 0.39.0
July 27, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Revamped Organization Dashboard
We’ve made it easier to see who’s making the most of W&B with our overhauled Organization Dashboard, accessible to W&B admins. You can now see details on who’s created runs and reports, who’s actively using W&B, and whose invites are pending, and you can export all of this as a CSV to share across your organization. Learn more in the docs.
For Dedicated Cloud customers, this feature has been turned on. For Customer-Managed W&B customers, contact W&B support and we’ll be happy to work with you to enable it.
Fixes
Restrict service API keys to team admins
Launch agent configuration is now shown on the Agents page
Added navigation panel while viewing a single Launch job.
Automations can now show configuration parameters for the associated job.
Fixed issue with grouped runs not live updating
Removed extra / in magic and normal link url
Check base for incremental artifacts
Inviting a user into multiple teams will no longer take up too many seats in the org
1.7.22 - 0.38.0
July 13, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Metric visualization enhancements
We’re continuing to enhance our core metric visualization experience. You can now define which regex-matched metrics to render in your plots, up to 100 metrics at once. And to more accurately represent data at high scale, we’ve added a new time-weighted exponential moving average smoothing algorithm for plots (check out all of our supported algorithms).
Feedback surveys
W&B has always built our product based on customer feedback. Now, we’re happy to introduce a new way for you to shape the future of W&B: in-app feedback surveys in your Dedicated Cloud or Customer-Managed W&B install. Starting July 17th, W&B users will start periodically seeing simple 1 - 10 Net Promoter Score surveys in the application. All identifying information is anonymized. We appreciate all your feedback and look forward to making W&B even better, together.
Fixes
Major improvement to artifact download speed: over a 6x speedup on our 1-million-file artifact benchmark. Please upgrade to SDK version 0.15.5+.
Run data permadeletion is now available (default off). This can be enabled with the GORILLA_DATA_RETENTION_PERIOD environment variable, specified in hours. Please take care before updating this variable and/or chat with W&B Support, since the deletion is permanent. Artifacts will not be deleted by this setting.
Updated report sharing emails to include a preview.
Relaxed HTML sanitization rules for reports in projects; this had been causing rare problems with report rendering.
Expanded the maximum number of metrics that can be matched by a regex in chart configuration; previously always 10, the maximum is now 100.
Fixed issue with media panel step slider becoming unsynced with the media shown.
Added time-weighted exponential moving average as an option for smoothing in plots.
The “Search panels” textbox in workspaces now preserves the user’s last search.
Applying a username filter when runs are grouped will no longer error.
(Launch) The loading of the Launch tab should now be much faster, typically under two seconds.
(Launch) There’s now an option to edit queue configs using YAML instead of JSON. It’s also now more clear how to edit queue configs.
(Launch) Runs will now show error messages in the UI when they crash or fail.
(Launch) If you don’t specify a project when creating a job, we’ll now use the value for WANDB_PROJECT from your wandb.init.
(Launch) Updated support for custom accelerator images—these will run in noninteractive mode when building, which had been blocking some images.
(Launch) Fixed issue where the run author for sweeps was the agent service account, rather than the real author
(Launch) Clicking outside the Launch drawer will no longer close the drawer automatically.
(Launch) Fixed issue where training jobs that had been enqueued by a sweep but not run yet were not correctly removed from the queue if you later stopped the sweep.
(Launch) The Launch navigation link is now hidden for users who aren’t part of the team.
(Launch) Fixed formatting and display issues on Agent logs.
Fixed scrolling, resizing, and cloning issues in Automations panel.
Fixed pagination on artifact action history.
Added support for pre-signed URLs using a VPC endpoint URL if the AWS_S3_ENDPOINT_URL env var is set and passed in from the SDK side.
Fixed enterprise dashboard link when organization name contains “&”
Updated tag colors to be consistent
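The pre-signed URL fix above is driven by an environment variable on the SDK side. A sketch, with a hypothetical VPC endpoint URL:

```python
import os

# Hypothetical VPC endpoint; when set before the SDK runs, pre-signed
# URLs for artifact transfers are generated against this endpoint.
os.environ["AWS_S3_ENDPOINT_URL"] = (
    "https://bucket.vpce-0abc123-example.s3.us-east-1.vpce.amazonaws.com"
)

# import wandb
# wandb.init(...)  # subsequent artifact up/downloads use the endpoint
print(os.environ["AWS_S3_ENDPOINT_URL"].startswith("https://"))
```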
1.7.23 - 0.36.0
June 14, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Clone Runs with Launch
If you want to repeat a run but tweak a couple hyperparameters–say bump the batch size to take advantage of a larger machine–it’s now easy to clone a run using W&B Launch. Go to the run overview, click Clone, and you’ll be able to select new infrastructure to execute the job on, with new hyperparameters. Learn more in the Launch documentation.
Fixes
Added report creation and update action to audit logs.
Artifacts read through the SDK will now be captured in the audit logs.
In report creation, added button to select all plots to add to the new report
New view-only users signing up via a report link will now be fast tracked to the report, rather than going through the normal signup process.
Team admins can now add protected aliases.
Improved media panel handling of intermediate steps.
Removed inactive ‘New Model’ button from Model Registry homepage for anonymous users
Ability to copy data from plot legends has been rolled out to all users.
Fixed incorrect progress indicator in Model Registry onboarding checklist.
Fixed issue where the Automations page could crash when job name had slashes.
Fixed issue where a user could update the wrong user profiles.
Added option to permanently delete runs and their associated metrics after a duration specified in an environment variable.
1.7.24 - 0.35.0
June 07, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Security
Fixed issue where API keys were logged for recently logged in users. Check for FetchAuthUserByAPIKey in the logs which you can find in gorilla.log from a debug bundle and rotate any keys that are found.
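To check an instance for affected keys, a small script can scan the `gorilla.log` extracted from a debug bundle for the `FetchAuthUserByAPIKey` marker mentioned above. This helper is a sketch, not W&B tooling:

```python
from pathlib import Path

def find_leaked_key_lines(log_path):
    """Return log lines containing FetchAuthUserByAPIKey; any API keys
    appearing on those lines should be rotated."""
    text = Path(log_path).read_text(errors="replace")
    return [line for line in text.splitlines()
            if "FetchAuthUserByAPIKey" in line]

# Example: find_leaked_key_lines("gorilla.log")
```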
Features
Launch Agent Logs Now in the GUI
W&B Launch allows you to push machine learning jobs to a wide range of specialized compute environments. With this update, you can now use W&B to monitor and debug jobs running in these remote environments, without needing to log into your AWS or GCP console.
Fixes
Logs tab is no longer trimmed to 1000 rows.
Fixed scenario where artifact files pagination could get into an infinite loop
Fixed bug where success toast messages were not appearing
The Runs table will now correctly show the git commit value
1.7.25 - 0.34.0
May 31, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
New Model Registry UI
We’re making it easier for users to manage a long list of models, and navigate seamlessly between entities in the model registry. With this new UI, users can:
Look at all your registered models
Filter to registered models within a specific team
With the new list view, users can expand each panel to see the individual versions inside it, including each version’s aliases, metadata, or run metrics. Clicking a version from this quick view takes you to its version view
Look at an overview directly by clicking “View Details”
See a preview of how many versions, consumers, and automations are present for each registered model
Create Automations directly
See some metadata columns and details in preview
Change Model Access Controls
Fixes
Improved search functionality for better universal search ranking results.
Added functionality to add/delete multiple tags at once in the model registry
Enhanced the FileMarkdown feature to correctly scroll long content.
Made the default team selection dropdown scrollable.
Removed the UI access restriction for Tier 1/2/3 plans based on tracked hour usage.
Added tooltips for LLM trace viewer spans
LLM trace timeline/detail now splits horizontally in fullscreen
Added entity / team badges to Model Registry entries.
Improved the navigation bar experience for logged out users
Disabled storage/artifact banners to avoid issue where UI blocks for orgs with excess artifacts.
Fixed issues where user avatars were not being displayed correctly.
Fixed issue using Launch with Azure Git URLs
Launch configuration boxes now work in airgapped environments
In Launch queue creation, show teams as disabled (rather than hidden) for non-admins.
Fixed issue with embedding projector rendering
Fixes issue that prevented users from being able to reset their password in some cases involving mixed-case usernames.
Files with special characters now show up in the media panel in Azure
Added the ability to override the inline display format for timestamps.
Reports with custom charts now load when not logged in.
Wide GIFs no longer overflow fullscreen view
Increase default automations limit from 20 to 200.
Fixed bug that made it appear possible to delete the version alias of a registered model (in fact, the backend never deleted it).
1.7.26 - 0.33.0
May 10, 2023
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers using Self-managed are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
Features
Prompts: Zoom and pan
Explore complex chains of LLM prompts more easily with new zoom and pan controls in our prompts tracer.
Model registry admin role
Control your model promotion process with a new role for model registry admins. These users can manage the list of protected aliases (for example, “challenger” or “prod”), as well as apply or remove protected aliases for model versions.
Viewer role
You can now share your W&B findings with a broader audience with the introduction of a Viewer role for W&B Server. Users with this role can view anything their teams make, but not create, edit, or delete anything. These seats are measured separately from traditional W&B Server seats, so reach out to your W&B account team to request an updated license.
Improved sharing: optional magic link, and easier signup for viewers
Team admins can now disable magic link sharing for a team and its members. Disabling public sharing in the team settings allows you to increase team privacy controls. Meanwhile, it’s now easier for users who receive a report link to access the report in W&B after signing up.
Improved report composition
Reports help share your W&B findings throughout an organization, including with people outside the ML team. We’ve made several investments to make reports as simple and frictionless as possible to create and share, including an improved drafting experience with enhanced draft publication, editing, management, and sharing UX.
Updated navigation
As W&B has expanded the parts of the ML workflow we cover, we’ve heard your feedback that it can be hard to move around the application. So we’ve updated the navigation sidebar to include clearer labels on the product area, and added backlinks to certain detail screens. We’ve also renamed “Triggers” to “Automations” to better reflect the power of the feature.
Fixes
When hovering over a plot in workspaces or a report, you can now use Cmd+C or Ctrl+C to copy run names and plot values shown in the hover control.
Changes to default workspaces are now no longer auto-saved.
Metrics in the Overview → Summary section now are formatted with commas.
Added an install-level option to allow non-admin users to create teams (default off; contact W&B support to enable it).
Weave plots now support log scales.
The Launch panel can now be expanded horizontally to give more space for viewing parameters.
The Launch panel now indicates whether a queue is active
The Launch panel now allows you to choose a project for the run to be logged in.
Launch queues can now only be created by team admins.
Improved Markdown support in Launch panel.
Improved error message on empty Launch queue configurations.
Filters on the Sweeps parallel coordinates plot will now apply to all selected runsets.
Sweeps now no longer require a metric.
Added support for tracking reference artifact files saved outside W&B in Azure Blob Storage.
Fixed bug in Markdown editing in Reports
Fullscreen Weave panels can now share config changes with the original panel.
Improved display of empty tables
Fixed bug in which the first several characters of logs were cut off
1.7.27 -
This release is no longer supported. A major release and its patches are supported for six months from the initial release date.
Customers with Self-managed instances are responsible for upgrading to a supported release in time to maintain support. For assistance or questions, contact support.
2 - Command Line Interface
Usage
wandb [OPTIONS] COMMAND [ARGS]...
Options
Option
Description
--version
Show the version and exit.
Commands
Command
Description
agent
Run the W&B agent
artifact
Commands for interacting with artifacts
beta
Beta versions of wandb CLI commands.
controller
Run the W&B local sweep controller
disabled
Disable W&B.
docker
Run your code in a docker container.
docker-run
Wrap docker run and adds WANDB_API_KEY and WANDB_DOCKER…
enabled
Enable W&B.
init
Configure a directory with Weights & Biases
job
Commands for managing and viewing W&B jobs
launch
Launch or queue a W&B Job.
launch-agent
Run a W&B launch agent.
launch-sweep
Run a W&B launch sweep (Experimental).
login
Login to Weights & Biases
offline
Disable W&B sync
online
Enable W&B sync
pull
Pull files from Weights & Biases
restore
Restore code, config and docker state for a run
scheduler
Run a W&B launch sweep scheduler (Experimental)
server
Commands for operating a local W&B server
status
Show configuration settings
sweep
Initialize a hyperparameter sweep.
sync
Upload an offline training directory to W&B
verify
Verify your local instance
2.1 - wandb agent
Usage
wandb agent [OPTIONS] SWEEP_ID
Summary
Run the W&B agent
Options
Option
Description
-p, --project
The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled ‘Uncategorized’.
-e, --entity
The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
--count
The max number of runs for this agent.
2.2 - wandb artifact
Usage
wandb artifact [OPTIONS] COMMAND [ARGS]...
Summary
Commands for interacting with artifacts
Options
Option
Description
Commands
Command
Description
cache
Commands for interacting with the artifact cache
get
Download an artifact from wandb
ls
List all artifacts in a wandb project
put
Upload an artifact to wandb
2.2.1 - wandb artifact cache
Usage
wandb artifact cache [OPTIONS] COMMAND [ARGS]...
Summary
Commands for interacting with the artifact cache
Options
Option
Description
Commands
Command
Description
cleanup
Clean up less frequently used files from the artifacts cache
2.6 - wandb docker
Usage
wandb docker [OPTIONS] [DOCKER_RUN_ARGS]... [DOCKER_IMAGE]
Summary
Run your code in a docker container.
W&B docker lets you run your code in a docker image, ensuring wandb is configured. It adds the WANDB_DOCKER and WANDB_API_KEY environment variables to your container and mounts the current directory in /app by default. You can pass additional args which will be added to docker run before the image name is declared. We’ll choose a default image for you if one isn’t passed:
wandb docker gcr.io/kubeflow-images-public/tensorflow-1.12.0-notebook-cpu:v0.4.0 --jupyter
wandb docker wandb/deepo:keras-gpu --no-tty --cmd "python train.py --epochs=5"
By default, we override the entrypoint to check for the existence of wandb and install it if not present. If you pass the --jupyter flag we will ensure jupyter is installed and start jupyter lab on port 8888. If we detect nvidia-docker on your system we will use the nvidia runtime. If you just want wandb to set environment variables for an existing docker run command, see the wandb docker-run command.
Options
Option
Description
--nvidia / --no-nvidia
Use the nvidia runtime. Defaults to nvidia if nvidia-docker is present.
--digest
Output the image digest and exit.
--jupyter / --no-jupyter
Run jupyter lab in the container.
--dir
Which directory to mount the code in the container.
--no-dir
Don’t mount the current directory.
--shell
The shell to start the container with.
--port
The host port to bind jupyter on.
--cmd
The command to run in the container.
--no-tty
Run the command without a tty.
2.7 - wandb docker-run
Usage
wandb docker-run [OPTIONS] [DOCKER_RUN_ARGS]...
Summary
Wrap docker run and adds WANDB_API_KEY and WANDB_DOCKER environment
variables.
This will also set the runtime to nvidia if the nvidia-docker executable is
present on the system and --runtime wasn’t set.
See docker run --help for more details.
Options
Option
Description
2.8 - wandb enabled
Usage
wandb enabled [OPTIONS]
Summary
Enable W&B.
Options
Option
Description
--service
Enable W&B service [default: True]
2.9 - wandb init
Usage
wandb init [OPTIONS]
Summary
Configure a directory with Weights & Biases
Options
Option
Description
-p, --project
The project to use.
-e, --entity
The entity to scope the project to.
--reset
Reset settings
-m, --mode
Can be “online”, “offline” or “disabled”. Defaults to online.
2.10 - wandb job
Usage
wandb job [OPTIONS] COMMAND [ARGS]...
Summary
Commands for managing and viewing W&B jobs
Options
Option
Description
Commands
Command
Description
create
Create a job from a source, without a wandb run.
describe
Describe a launch job.
list
List jobs in a project
2.10.1 - wandb job create
Usage
wandb job create [OPTIONS] {git|code|image} PATH
Summary
Create a job from a source, without a wandb run.
Jobs can be of three types, git, code, or image.
git: A git source, with an entrypoint either in the path or provided explicitly pointing to the main python executable.
code: A code path containing a requirements.txt file.
image: A docker image.
Options
Option
Description
-p, --project
The project you want to list jobs from.
-e, --entity
The entity the jobs belong to
-n, --name
Name for the job
-d, --description
Description for the job
-a, --alias
Alias for the job
--entry-point
Entrypoint to the script, including an executable and an entrypoint file. Required for code or repo jobs. If --build-context is provided, paths in the entrypoint command will be relative to the build context.
-g, --git-hash
Commit reference to use as the source for git jobs
-r, --runtime
Python runtime to execute the job
-b, --build-context
Path to the build context from the root of the job source code. If provided, this is used as the base path for the Dockerfile and entrypoint.
--base-image
Base image to use for the job. Incompatible with image jobs.
--dockerfile
Path to the Dockerfile for the job. If --build-context is provided, the Dockerfile path will be relative to the build context.
2.10.2 - wandb job describe
Usage
wandb job describe [OPTIONS] JOB
Summary
Describe a launch job. Provide the launch job in the form of:
entity/project/job-name:alias-or-version
2.11 - wandb launch
Usage
wandb launch [OPTIONS]
Summary
Launch or queue a W&B Job.
Options
Option
Description
-u, --uri (str)
Local path or git repo uri to launch. If provided this command will create a job from the specified uri.
-j, --job (str)
Name of the job to launch. If passed in, launch does not require a uri.
--entry-point
Entry point within project. [default: main]. If the entry point is not found, attempts to run the project file with the specified name as a script, using ‘python’ to run .py files and the default shell (specified by environment variable $SHELL) to run .sh files. If passed in, will override the entrypoint value passed in using a config file.
--build-context (str)
Path to the build context within the source code. Defaults to the root of the source code. Compatible only with -u.
--name
Name of the run under which to launch the run. If not specified, a random run name will be used to launch run. If passed in, will override the name passed in using a config file.
-e, --entity (str)
Name of the target entity which the new run will be sent to. Defaults to using the entity set by local wandb/settings folder. If passed in, will override the entity value passed in using a config file.
-p, --project (str)
Name of the target project which the new run will be sent to. Defaults to using the project name given by the source uri or for github runs, the git repo name. If passed in, will override the project value passed in using a config file.
-r, --resource
Execution resource to use for run. Supported values: ‘local-process’, ‘local-container’, ‘kubernetes’, ‘sagemaker’, ‘gcp-vertex’. This is now a required parameter if pushing to a queue with no resource configuration. If passed in, will override the resource value passed in using a config file.
-d, --docker-image
Specific docker image you’d like to use. In the form name:tag. If passed in, will override the docker image value passed in using a config file.
--base-image
Docker image to run job code in. Incompatible with --docker-image.
-c, --config
Path to JSON file (must end in ‘.json’) or JSON string which will be passed as a launch config. Dictates how the launched run will be configured.
-v, --set-var
Set template variable values for queues with allow listing enabled, as key-value pairs e.g. --set-var key1=value1 --set-var key2=value2
-q, --queue
Name of run queue to push to. If none, launches single run directly. If supplied without an argument (--queue), defaults to queue ‘default’. Else, if name supplied, specified run queue must exist under the project and entity supplied.
--async
Flag to run the job asynchronously. Defaults to false, i.e. unless --async is set, wandb launch will wait for the job to finish. This option is incompatible with --queue; asynchronous options when running with an agent should be set on wandb launch-agent.
--resource-args
Path to JSON file (must end in ‘.json’) or JSON string which will be passed as resource args to the compute resource. The exact content which should be provided is different for each execution backend. See documentation for layout of this file.
--dockerfile
Path to the Dockerfile used to build the job, relative to the job’s root
--priority
When --queue is passed, set the priority of the job. One of: critical, high, medium, low.
2.12 - wandb launch-agent
Usage
wandb launch-agent [OPTIONS]
Summary
Run a W&B launch agent.
Options
Option
Description
-q, --queue
The name of a queue for the agent to watch. Multiple -q flags supported.
-e, --entity
The entity to use. Defaults to current logged-in user
-l, --log-file
Destination for internal agent logs. Use - for stdout. By default, all agent logs go to debug.log in your wandb/ subdirectory, or to WANDB_DIR if set.
-j, --max-jobs
The maximum number of launch jobs this agent can run in parallel. Defaults to 1. Set to -1 for no upper limit
-c, --config
path to the agent config yaml to use
-v, --verbose
Display verbose output
2.13 - wandb launch-sweep
Usage
wandb launch-sweep [OPTIONS] [CONFIG]
Summary
Run a W&B launch sweep (Experimental).
Options
Option
Description
-q, --queue
The name of a queue to push the sweep to
-p, --project
Name of the project which the agent will watch. If passed in, will override the project value passed in using a config file
-e, --entity
The entity to use. Defaults to current logged-in user
-r, --resume_id
Resume a launch sweep by passing an 8-char sweep id. Queue required
--prior_run
ID of an existing run to add to this sweep
2.14 - wandb login
Usage
wandb login [OPTIONS] [KEY]...
Summary
Login to Weights & Biases
Options
Option
Description
--cloud
Login to the cloud instead of local
--host, --base-url
Login to a specific instance of W&B
--relogin
Force relogin if already logged in.
--anonymously
Log in anonymously
--verify / --no-verify
Verify login credentials
2.15 - wandb offline
Usage
wandb offline [OPTIONS]
Summary
Disable W&B sync
Options
Option
Description
2.16 - wandb online
Usage
wandb online [OPTIONS]
Summary
Enable W&B sync
Options
Option
Description
2.17 - wandb pull
Usage
wandb pull [OPTIONS] RUN
Summary
Pull files from Weights & Biases
Options
Option
Description
-p, --project
The project you want to download.
-e, --entity
The entity to scope the listing to.
2.18 - wandb restore
Usage
wandb restore [OPTIONS] RUN
Summary
Restore code, config and docker state for a run
Options
Option
Description
--no-git
Don’t restore git state
--branch / --no-branch
Whether to create a branch or checkout detached
-p, --project
The project you wish to upload to.
-e, --entity
The entity to scope the listing to.
2.19 - wandb scheduler
Usage
wandb scheduler [OPTIONS] SWEEP_ID
Summary
Run a W&B launch sweep scheduler (Experimental)
Options
Option
Description
2.20 - wandb server
Usage
wandb server [OPTIONS] COMMAND [ARGS]...
Summary
Commands for operating a local W&B server
Options
Option
Description
Commands
Command
Description
start
Start a local W&B server
stop
Stop a local W&B server
2.20.1 - wandb server start
Usage
wandb server start [OPTIONS]
Summary
Start a local W&B server
Options
Option
Description
-p, --port
The host port to bind W&B server on
-e, --env
Env vars to pass to wandb/local
--daemon / --no-daemon
Run or don’t run in daemon mode
2.20.2 - wandb server stop
Usage
wandb server stop [OPTIONS]
Summary
Stop a local W&B server
Options
Option
Description
2.21 - wandb status
Usage
wandb status [OPTIONS]
Summary
Show configuration settings
Options
Option
Description
--settings / --no-settings
Show the current settings
2.22 - wandb sweep
Usage
wandb sweep [OPTIONS] CONFIG_YAML_OR_SWEEP_ID
Summary
Initialize a hyperparameter sweep. Search for hyperparameters that optimize
a cost function of a machine learning model by testing various combinations.
Options
Option
Description
-p, --project
The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled Uncategorized.
-e, --entity
The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
--controller
Run local controller
--verbose
Display verbose output
--name
The name of the sweep. The sweep ID is used if no name is specified.
--program
Set sweep program
--update
Update pending sweep
--stop
Finish a sweep to stop running new runs and let currently running runs finish.
--cancel
Cancel a sweep to kill all running runs and stop running new runs.
--pause
Pause a sweep to temporarily stop running new runs.
--resume
Resume a sweep to continue running new runs.
--prior_run
ID of an existing run to add to this sweep
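The config passed as CONFIG_YAML_OR_SWEEP_ID can also be expressed from Python. A minimal sketch, assuming a project named my-project; the program name, metric, and parameter values are illustrative:

```python
# Equivalent in-Python form of a minimal sweep config YAML.
sweep_config = {
    "program": "train.py",
    "method": "bayes",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def create_sweep():
    # Requires a W&B login; returns a sweep ID usable with `wandb agent`.
    import wandb
    return wandb.sweep(sweep_config, project="my-project")
```

The returned sweep ID plays the same role as the SWEEP_ID argument accepted by wandb agent.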
2.23 - wandb sync
Usage
wandb sync [OPTIONS] [PATH]...
Summary
Upload an offline training directory to W&B
Options
Option
Description
--id
The run you want to upload to.
-p, --project
The project you want to upload to.
-e, --entity
The entity to scope to.
--job_type
Specifies the type of run for grouping related runs together.
--sync-tensorboard / --no-sync-tensorboard
Stream tfevent files to wandb.
--include-globs
Comma separated list of globs to include.
--exclude-globs
Comma separated list of globs to exclude.
--include-online / --no-include-online
Include online runs
--include-offline / --no-include-offline
Include offline runs
--include-synced / --no-include-synced
Include synced runs
--mark-synced / --no-mark-synced
Mark runs as synced
--sync-all
Sync all runs
--clean
Delete synced runs
--clean-old-hours
Delete runs created before this many hours. To be used alongside the --clean flag.
--clean-force
Clean without confirmation prompt.
--show
Number of runs to show
--append
Append run
--skip-console
Skip console logs
2.24 - wandb verify
Usage
wandb verify [OPTIONS]
Summary
Verify your local instance
Options
Option
Description
--host
Test a specific instance of W&B
3 - JavaScript Library
The W&B SDK for TypeScript, Node, and modern Web Browsers
Similar to our Python library, we offer a client to track experiments in JavaScript/TypeScript.
Log metrics from your Node server and display them in interactive plots on W&B
We spawn a separate MessageChannel to process all api calls async. This will cause your script to hang if you don’t call await wandb.finish().
Node/CommonJS:
const wandb = require('@wandb/sdk').default;
We’re currently missing a lot of the functionality found in our Python SDK, but basic logging functionality is available. We’ll be adding additional features like Tables soon.
Authentication and Settings
In node environments we look for process.env.WANDB_API_KEY and prompt for its input if we have a TTY. In non-node environments we look for sessionStorage.getItem("WANDB_API_KEY"). Additional settings can be found here.
Integrations
Our Python integrations are widely used by our community, and we hope to build out more JavaScript integrations to help LLM app builders leverage whatever tool they want.
If you have any requests for additional integrations, we’d love for you to open an issue with details about the request.
LangChain.js
This library integrates with the popular library for building LLM applications, LangChain.js version >= 0.0.75.
import { WandbTracer } from '@wandb/sdk/integrations/langchain';
const wbTracer = await WandbTracer.init({ project: 'langchain-test' });
// run your langchain workloads...
chain.call({ input: "My prompt" }, wbTracer);
await WandbTracer.finish();
We spawn a separate MessageChannel to process all api calls async. This will cause your script to hang if you don’t call await WandbTracer.finish().
The sweep agent uses the sweep_id to know which sweep it
is a part of, what function to execute, and (optionally) how
many agents to run.
Args
sweep_id
The unique identifier for a sweep. A sweep ID is generated by W&B CLI or Python SDK.
function
A function to call instead of the “program” specified in the sweep config.
entity
The username or team name where you want to send W&B runs created by the sweep to. Ensure that the entity you specify already exists. If you don’t specify an entity, the run will be sent to your default entity, which is usually your username.
project
The name of the project where W&B runs created from the sweep are sent to. If the project is not specified, the run is sent to a project labeled “Uncategorized”.
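A minimal sketch of the agent call described above, assuming you already have a sweep ID; the train() objective is illustrative:

```python
def train():
    # Illustrative objective: sweep parameter values arrive via run.config.
    import wandb
    with wandb.init() as run:
        lr = run.config.learning_rate
        run.log({"loss": 1.0 / lr})

def run_agent(sweep_id: str):
    # Mirrors `wandb agent --count 2 <sweep_id>`; entity and project fall
    # back to your defaults when omitted.
    import wandb
    wandb.agent(sweep_id, function=train, count=2)
```

Passing function overrides the "program" specified in the sweep config, as noted above.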
Construct an empty W&B Artifact. Populate an artifact's contents with methods that
begin with add. Once the artifact has all the desired files, you can call
wandb.log_artifact() to log it.
Args
name
A human-readable name for the artifact. Use the name to identify a specific artifact in the W&B App UI or programmatically. You can interactively reference an artifact with the use_artifact Public API. A name can contain letters, numbers, underscores, hyphens, and dots. The name must be unique across a project.
type
The artifact’s type. Use the type of an artifact to both organize and differentiate artifacts. You can use any string that contains letters, numbers, underscores, hyphens, and dots. Common types include dataset or model. Include model within your type string if you want to link the artifact to the W&B Model Registry.
description
A description of the artifact. For Model or Dataset Artifacts, add documentation for your standardized team model or dataset card. View an artifact's description programmatically with the Artifact.description attribute or in the W&B App UI. W&B renders the description as markdown in the W&B App.
metadata
Additional information about an artifact. Specify metadata as a dictionary of key-value pairs. You can specify no more than 100 total keys.
incremental
Use Artifact.new_draft() method instead to modify an existing artifact.
use_as
W&B Launch specific parameter. Not recommended for general use.
Returns
An Artifact object.
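The constructor described above can be sketched as follows; the file path, project name, and metadata are illustrative, and logging requires a W&B login:

```python
import re

# Artifact names may contain letters, numbers, underscores, hyphens, and dots
# (per the `name` argument above); a small validator for illustration.
_NAME_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def is_valid_artifact_name(name: str) -> bool:
    return bool(_NAME_RE.match(name))

def log_dataset_artifact(path: str):
    # Hypothetical helper: assumes you are logged in and `path` exists.
    import wandb
    with wandb.init(project="my-project") as run:
        artifact = wandb.Artifact(
            name="mnist-raw",
            type="dataset",
            description="Raw MNIST files",
            metadata={"source": "placeholder", "num_files": 1},
        )
        artifact.add_file(path)
        run.log_artifact(artifact)
```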
Attributes
aliases
List of one or more semantically-friendly references or identifying “nicknames” assigned to an artifact version. Aliases are mutable references that you can programmatically reference. Change an artifact’s alias with the W&B App UI or programmatically. See Create new artifact versions for more information.
collection
The collection this artifact was retrieved from. A collection is an ordered group of artifact versions. If this artifact was retrieved from a portfolio / linked collection, that collection will be returned rather than the collection that an artifact version originated from. The collection that an artifact originates from is known as the source sequence.
commit_hash
The hash returned when this artifact was committed.
created_at
Timestamp when the artifact was created.
description
A description of the artifact.
digest
The logical digest of the artifact. The digest is the checksum of the artifact’s contents. If an artifact has the same digest as the current latest version, then log_artifact is a no-op.
entity
The name of the entity of the secondary (portfolio) artifact collection.
file_count
The number of files (including references).
history_step
The nearest step at which history metrics were logged for the source run of the artifact.
id
The artifact’s ID.
manifest
The artifact’s manifest. The manifest lists all of its contents, and can’t be changed once the artifact has been logged.
metadata
User-defined artifact metadata. Structured data associated with the artifact.
name
The artifact name and version in its secondary (portfolio) collection. A string with the format {collection}:{alias}. Before the artifact is saved, contains only the name since the version is not yet known.
project
The name of the project of the secondary (portfolio) artifact collection.
qualified_name
The entity/project/name of the secondary (portfolio) collection.
size
The total size of the artifact in bytes. Includes any references tracked by this artifact.
source_collection
The artifact’s primary (sequence) collection.
source_entity
The name of the entity of the primary (sequence) artifact collection.
source_name
The artifact name and version in its primary (sequence) collection. A string with the format {collection}:{alias}. Before the artifact is saved, contains only the name since the version is not yet known.
source_project
The name of the project of the primary (sequence) artifact collection.
source_qualified_name
The entity/project/name of the primary (sequence) collection.
source_version
The artifact’s version in its primary (sequence) collection. A string with the format v{number}.
state
The status of the artifact. One of: “PENDING”, “COMMITTED”, or “DELETED”.
tags
List of one or more tags assigned to this artifact version.
ttl
The time-to-live (TTL) policy of an artifact. Artifacts are deleted shortly after a TTL policy’s duration passes. If set to None, the artifact deactivates TTL policies and will not be scheduled for deletion, even if there is a team default TTL. An artifact inherits a TTL policy from the team default if the team administrator defines a default TTL and there is no custom policy set on an artifact.
type
The artifact’s type. Common types include dataset or model.
updated_at
The time when the artifact was last updated.
url
Constructs the URL of the artifact.
version
The artifact’s version in its secondary (portfolio) collection.
obj
The object to add. Currently supports one of: Bokeh, JoinedTable, PartitionedTable, Table, Classes, ImageMask, BoundingBoxes2D, Audio, Image, Video, Html, Object3D.
name
The path within the artifact to add the object.
overwrite
If True, overwrite existing objects with the same file path (if applicable).
Returns
The added manifest entry
Raises
ArtifactFinalizedError
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
name
The subdirectory name within an artifact. The name you specify appears in the W&B App UI nested by artifact’s type. Defaults to the root of the artifact.
skip_cache
If set to True, W&B will not copy/move files to the cache while uploading.
policy
“mutable”
Raises
ArtifactFinalizedError
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
name
The path within the artifact to use for the file being added. Defaults to the basename of the file.
is_tmp
If true, then the file is renamed deterministically to avoid collisions.
skip_cache
If True, W&B will not copy files to the cache after uploading.
policy
By default, set to “mutable”. If set to “mutable”, create a temporary copy of the file to prevent corruption during upload. If set to “immutable”, disable protection and rely on the user not to delete or change the file.
overwrite
If True, overwrite the file if it already exists.
Returns
The added manifest entry.
Raises
ArtifactFinalizedError
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
Unlike files or directories that you add to an artifact, references are not
uploaded to W&B. For more information,
see Track external files.
By default, the following schemes are supported:
http(s): The size and digest of the file will be inferred by the
Content-Length and the ETag response headers returned by the server.
s3: The checksum and size are pulled from the object metadata. If bucket
versioning is enabled, then the version ID is also tracked.
gs: The checksum and size are pulled from the object metadata. If bucket
versioning is enabled, then the version ID is also tracked.
https, domain matching *.blob.core.windows.net (Azure): The checksum and size
are pulled from the blob metadata. If storage account versioning is
enabled, then the version ID is also tracked.
file: The checksum and size are pulled from the file system. This scheme
is useful if you have an NFS share or other externally mounted volume
containing files you wish to track but not necessarily upload.
For any other scheme, the digest is just a hash of the URI and the size is left
blank.
Args
uri
The URI path of the reference to add. The URI path can be an object returned from Artifact.get_entry to store a reference to another artifact’s entry.
name
The path within the artifact to place the contents of this reference.
checksum
Whether or not to checksum the resource(s) located at the reference URI. Checksumming is strongly recommended as it enables automatic integrity validation. Disabling checksumming will speed up artifact creation, but reference directories will not be iterated through, so the objects in the directory will not be saved to the artifact. We recommend setting checksum=False when adding reference objects, in which case a new version will only be created if the reference URI changes.
max_objects
The maximum number of objects to consider when adding a reference that points to directory or bucket store prefix. By default, the maximum number of objects allowed for Amazon S3, GCS, Azure, and local files is 10,000,000. Other URI schemas do not have a maximum.
Returns
The added manifest entries.
Raises
ArtifactFinalizedError
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
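A sketch of tracking a reference under the schemes listed above; the bucket name is a placeholder, and the pure helper only inspects the URI scheme:

```python
from urllib.parse import urlparse

# Schemes for which W&B can pull checksums/sizes, per the list above.
CHECKSUMMED_SCHEMES = {"http", "https", "s3", "gs", "file"}

def reference_scheme(uri: str) -> str:
    # Returns the URI scheme W&B would use to decide how to track a reference.
    return urlparse(uri).scheme

def track_s3_prefix():
    # Hypothetical usage: nothing is uploaded to W&B; only metadata about
    # the reference (checksums, sizes, version IDs) is recorded.
    import wandb
    with wandb.init(project="my-project") as run:
        artifact = wandb.Artifact("training-images", type="dataset")
        artifact.add_reference("s3://my-bucket/datasets/images", max_objects=10000)
        run.log_artifact(artifact)
```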
If called on a linked artifact (i.e. a member of a portfolio collection): only the link is deleted, and the
source artifact is unaffected.
Args
delete_aliases
If set to True, deletes all aliases associated with the artifact. Otherwise, this raises an exception if the artifact has existing aliases. This parameter is ignored if the artifact is linked (i.e. a member of a portfolio collection).
Download the contents of the artifact to the specified root directory.
Existing files located within root are not modified. Explicitly delete root
before you call download if you want the contents of root to exactly match
the artifact.
Args
root
The directory W&B stores the artifact’s files.
allow_missing_references
If set to True, any invalid reference paths will be ignored while downloading referenced files.
skip_cache
If set to True, the artifact cache will be skipped when downloading and W&B will download each file into the default root or specified download directory.
path_prefix
If specified, only files with a path that starts with the given prefix will be downloaded. Uses unix format (forward slashes).
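The download flow above can be sketched with the public API; the artifact path is a placeholder:

```python
def download_latest(artifact_path: str, root: str = "./artifacts"):
    # e.g. artifact_path = "entity/project/mnist-raw:latest" (placeholder).
    import wandb
    api = wandb.Api()
    artifact = api.artifact(artifact_path)
    # Existing files under `root` are left untouched, as noted above; delete
    # `root` first if you want an exact match with the artifact contents.
    return artifact.download(root=root)
```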
You cannot modify an artifact version once it is finalized because the artifact
is logged as a specific artifact version. Create a new artifact version
to log more data to an artifact. An artifact is automatically finalized
when you log the artifact with log_artifact.
Link this artifact to a portfolio (a promoted collection of artifacts).
Args
target_path
The path to the portfolio inside a project. The target path must adhere to one of the following schemas {portfolio}, {project}/{portfolio} or {entity}/{project}/{portfolio}. To link the artifact to the Model Registry, rather than to a generic portfolio inside a project, set target_path to the following schema {"model-registry"}/{Registered Model Name} or {entity}/{"model-registry"}/{Registered Model Name}.
aliases
A list of strings that uniquely identifies the artifact inside the specified portfolio.
Create a new draft artifact with the same content as this committed artifact.
Modifying an existing artifact creates a new artifact version known
as an “incremental artifact”. The artifact returned can be extended or
modified and logged as a new version.
item
The item to remove. Can be a specific manifest entry or the name of an artifact-relative path. If the item matches a directory, all items in that directory will be removed.
Raises
ArtifactFinalizedError
You cannot make changes to the current artifact version because it is finalized. Log a new artifact version instead.
For more on logging structured data for interactive dataset and model analysis,
see our guide to W&B Tables.
All of these special data types are subclasses of WBValue. All the data types
serialize to JSON, since that is what wandb uses to save the objects locally
and upload them to the W&B server.
Format images with 2D bounding box overlays for logging to W&B.
BoundingBoxes2D(
val: dict,
key: str
) -> None
Args
val
(dictionary) A dictionary of the following form:
box_data: (list of dictionaries) One dictionary for each bounding box, containing:
position: (dictionary) The position and size of the bounding box, in one of two formats. Note that boxes need not all use the same format.
{“minX”, “minY”, “maxX”, “maxY”}: (dictionary) A set of coordinates defining the upper and lower bounds of the box (the bottom left and top right corners).
{“middle”, “width”, “height”}: (dictionary) A set of coordinates defining the center and dimensions of the box, with “middle” as a list [x, y] for the center point and “width” and “height” as numbers.
domain: (string) One of two options for the bounding box coordinate domain:
null: By default, or if no argument is passed, the coordinate domain is assumed to be relative to the original image, expressing this box as a fraction or percentage of the original image. This means all coordinates and dimensions passed into the “position” argument are floating point numbers between 0 and 1.
“pixel”: (string literal) The coordinate domain is set to the pixel space. This means all coordinates and dimensions passed into “position” are integers within the bounds of the image dimensions.
class_id: (integer) The class label id for this box.
scores: (dictionary of string to number, optional) A mapping of named fields to numerical values (float or int). Can be used for filtering boxes in the UI based on a range of values for the corresponding field.
box_caption: (string, optional) A string to be displayed as the label text above this box in the UI, often composed of the class label, class name, and/or scores.
class_labels: (dictionary, optional) A map of integer class labels to their readable class names.
key
(string) The readable name or id for this set of bounding boxes (e.g. predictions, ground_truth)
Examples:
Log bounding boxes for a single image
import numpy as np
import wandb
run = wandb.init()
image = np.random.randint(low=0, high=256, size=(200, 300, 3))
class_labels = {0: "person", 1: "car", 2: "road", 3: "building"}
img = wandb.Image(
image,
boxes={
"predictions": {
"box_data": [
{
# one box expressed in the default relative/fractional domain
"position": {
"minX": 0.1,
"maxX": 0.2,
"minY": 0.3,
"maxY": 0.4,
},
"class_id": 1,
"box_caption": class_labels[1],
"scores": {"acc": 0.2, "loss": 1.2},
},
{
# another box expressed in the pixel domain
"position": {
"middle": [150, 20],
"width": 68,
"height": 112,
},
"domain": "pixel",
"class_id": 3,
"box_caption": "a building",
"scores": {"acc": 0.5, "loss": 0.7},
},
# Log as many boxes as needed
],
"class_labels": class_labels,
}
},
)
run.log({"driving_scene": img})
This class is typically used for saving and displaying neural net models. It
represents the graph as an array of nodes and edges. The nodes can have
labels that can be visualized by wandb.
Note: When logging a torch.Tensor as a wandb.Image, images are normalized. If you do not want to normalize your images, convert your tensors to a PIL Image.
Examples:
Create a wandb.Image from a numpy array
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Create a wandb.Image from a PILImage
import numpy as np
from PIL import Image as PILImage
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(
low=0, high=256, size=(100, 100, 3), dtype=np.uint8
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Log .jpg rather than .png (the default)
import numpy as np
import wandb
with wandb.init() as run:
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}", file_type="jpg")
examples.append(image)
run.log({"examples": examples})
Format image masks or overlays for logging to W&B.
ImageMask(
val: dict,
key: str
) -> None
Args
val
(dictionary) One of these two keys to represent the image:
mask_data: (2D numpy array) The mask containing an integer class label for each pixel in the image.
path: (string) The path to a saved image file of the mask.
class_labels: (dictionary of integers to strings, optional) A mapping of the integer class labels in the mask to readable class names. These default to class_0, class_1, class_2, etc.
key
(string) The readable name or id for this mask type (e.g. predictions, ground_truth)
(numpy array, string, io) Object3D can be initialized from a file or a numpy array. You can pass a path to a file or an io object and a file_type which must be one of SUPPORTED_TYPES
The shape of the numpy array must be one of either:
[[x y z], ...] nx3
[[x y z c], ...] nx4 where c is a category with supported range [1, 14]
[[x y z r g b], ...] nx6 where r, g, b are color values
data_or_path (Union[“TextIO”, str]): A path to a file or a TextIO stream. file_type (str): Specifies the data format passed to data_or_path. Required when data_or_path is a TextIO stream. This parameter is ignored if a file path is provided. The type is taken from the file extension.
points (Sequence[“Point”]): The points in the point cloud. boxes (Sequence[“Box3D”]): 3D bounding boxes for labeling the point cloud. Boxes are displayed in point cloud visualizations. vectors (Optional[Sequence[“Vector3D”]]): Each vector is displayed in the point cloud visualization. Can be used to indicate directionality of bounding boxes. Defaults to None. point_cloud_type (“lidar/beta”): At this time, only the “lidar/beta” type is supported. Defaults to “lidar/beta”.
Unlike traditional spreadsheets, Tables support numerous types of data:
scalar values, strings, numpy arrays, and most subclasses of wandb.data_types.Media.
This means you can embed Images, Video, Audio, and other sorts of rich, annotated media
directly in Tables, alongside other traditional scalar values.
(List[str]) Names of the columns in the table. Defaults to [“Input”, “Output”, “Expected”].
data
(List[List[any]]) 2D row-oriented array of values.
dataframe
(pandas.DataFrame) DataFrame object used to create the table. When set, data and columns arguments are ignored.
optional
(Union[bool, List[bool]]) Determines if None values are allowed. Defaults to True. A single bool value applies to all columns specified at construction time; a list of bool values applies to each respective column and must be the same length as columns.
allow_mixed_types
(bool) Determines if columns are allowed to have mixed types (disables type validation). Defaults to False
Adds one or more computed columns based on existing data.
Args
fn
A function which accepts one or two parameters, ndx (int) and row (dict), and returns a dict representing new columns for that row, keyed by the new column names. ndx is an integer representing the index of the row; it is only included if include_ndx is set to True. row is a dictionary keyed by existing columns.
Returns the table data by row, showing the index of the row and the relevant data.
Yields
index : int
The index of the row. Using this value in other W&B tables
will automatically build a relationship between the tables
row : List[any]
The data of the row.
(numpy array, string, io) Video can be initialized with a path to a file or an io object. The format must be "gif", "mp4", "webm" or "ogg", and must be specified with the format argument. Video can also be initialized with a numpy tensor, which must be either 4-dimensional or 5-dimensional. The dimensions should be (time, channel, height, width) or (batch, time, channel, height, width).
caption
(string) caption associated with the video for display
fps
(int) The frame rate to use when encoding raw video frames. Default value is 4. This parameter has no effect when data_or_path is a string, or bytes.
format
(string) format of video, necessary if initializing with path or io object.
Examples:
Log a numpy array as a video
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(low=0, high=256, size=(10, 3, 100, 100), dtype=np.uint8)
run.log({"video": wandb.Video(frames, fps=4)})
root_span (Span): The root span of the trace tree. model_dict (dict, optional): A dictionary containing the model dump. NOTE: model_dict is a completely-user-defined dict. The UI will render a JSON viewer for this dict, giving special treatment to dictionaries with a _kind key. This is because model vendors have such different serialization formats that we need to be flexible here.
Marks the completion of a W&B run and ensures all data is synced to the server.
The run’s final state is determined by its exit conditions and sync status.
Run States:
Running: Active run that is logging data and/or sending heartbeats.
Crashed: Run that stopped sending heartbeats unexpectedly.
Finished: Run completed successfully (exit_code=0) with all data synced.
Failed: Run completed with errors (exit_code!=0).
Args
exit_code
Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.
quiet
Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).
class Projects: An iterable collection of Project objects.
class QueuedRun: A single queued run associated with an entity and project. Call run = queued_run.wait_until_running() or run = queued_run.wait_until_finished() to access the run.
class Run: A single run associated with an entity and project.
Return a single artifact by parsing path in the form project/name or entity/project/name.
Args
name
(str) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms: name:version name:alias
type
(str, optional) The type of artifact to fetch.
Returns
An Artifact object.
Raises
ValueError
If the artifact name is not specified.
ValueError
If the artifact type is specified but does not match the type of the fetched artifact.
Note:
This method is intended for external use only. Do not call api.artifact() within the wandb repository code.
Return whether an artifact collection exists within a specified project and entity.
Args
name
(str) An artifact collection name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to “uncategorized”.
type
(str) The type of artifact collection
Returns
True if the artifact collection exists, False otherwise.
Return whether an artifact version exists within a specified project and entity.
Args
name
(str) An artifact name. May be prefixed with entity/project. If entity or project is not specified, it will be inferred from the override params if populated. Otherwise, entity will be pulled from the user settings and project will default to “uncategorized”. Valid names can be in the following forms: name:version name:alias
type
(str, optional) The type of artifact
Returns
True if the artifact version exists, False otherwise.
(str, optional) The ID to assign to the run, if given. The run ID is automatically generated by default, so in general, you do not need to specify this and should only do so at your own risk.
project
(str, optional) If given, the project of the new run.
entity
(str, optional) If given, the entity of the new run.
(str) Type of resource to be used for the queue. One of “local-container”, “local-process”, “kubernetes”, “sagemaker”, or “gcp-vertex”.
entity
(str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
prioritization_mode
(str) Optional version of prioritization to use. Either “V0” or None
config
(dict) Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}}) to specify template variables.
template_variables
(dict) A dictionary of template variable schemas to be used with the config. Expected format of: { "var-name": { "schema": { "type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"] } } }
Returns
The newly created RunQueue
Raises
ValueError
If any of the parameters are invalid.
wandb.Error
On wandb API errors.
The api object keeps a local cache of runs, so if the state of the run may
change while executing your script you must clear the local cache with
api.flush() to get the latest values associated with the run.
integrations(
    entity: Optional[str] = None,
    *,
    per_page: int = 50
) -> Iterator['Integration']
Return an iterator of all integrations for an entity.
Args
entity (str, optional): The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used. per_page (int, optional): Number of integrations to fetch per page. Defaults to 50.
Yields
Iterator[SlackIntegration | WebhookIntegration]: An iterator of any supported integrations.
(str, optional) The organization of the registry to fetch. If not specified, use the organization specified in the user’s settings.
filter
(dict, optional) MongoDB-style filter to apply to each object in the registry iterator. Fields available to filter for registries are name, description, created_at, updated_at. Fields available to filter for collections are name, tag, description, created_at, updated_at. Fields available to filter for versions are tag, alias, created_at, updated_at, metadata.
Return a single run by parsing path in the form entity/project/run_id.
Args
path
(str) path to run in the form entity/project/run_id. If api.entity is set, this can be in the form project/run_id and if api.project is set this can just be the run_id.
(str) path to project, should be in the form: “entity/project”
filters
(dict) queries for specific runs using the MongoDB query language. You can filter by run properties such as config.key, summary_metrics.key, state, entity, createdAt, etc. For example: {"config.experiment_name": "foo"} would find runs with a config entry of experiment name set to “foo”
order
(str) Order can be created_at, heartbeat_at, config.*.value, or summary_metrics.*. If you prepend order with a + order is ascending. If you prepend order with a - order is descending (default). The default order is run.created_at from oldest to newest.
per_page
(int) Sets the page size for query pagination.
include_sweeps
(bool) Whether to include the sweep runs in the results.
Returns
A Runs object, which is an iterable collection of Run objects.
slack_integrations(
    entity: Optional[str] = None,
    *,
    per_page: int = 50
) -> Iterator['SlackIntegration']
Return an iterator of Slack integrations for an entity.
Args
entity (str, optional): The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used. per_page (int, optional): Number of integrations to fetch per page. Defaults to 50.
Yields
Iterator[SlackIntegration]: An iterator of Slack integrations.
Examples:
Get all registered Slack integrations for the team “my-team”:
import wandb
api = wandb.Api()
slack_integrations = api.slack_integrations(entity="my-team")
Find only Slack integrations that post to channel names starting with “team-alerts-”:
slack_integrations = api.slack_integrations(entity="my-team")
team_alert_integrations = [
ig
for ig in slack_integrations
if ig.channel_name.startswith("team-alerts-")
]
Return a sweep by parsing path in the form entity/project/sweep_id.
Args
path
(str, optional) path to sweep in the form entity/project/sweep_id. If api.entity is set, this can be in the form project/sweep_id and if api.project is set this can just be the sweep_id.
(str) Optional name of the entity to create the queue. If None, will use the configured or default entity.
resource_config
(dict) Optional default resource configuration to be used for the queue. Use handlebars (eg. {{var}}) to specify template variables.
resource_type
(str) Type of resource to be used for the queue. One of “local-container”, “local-process”, “kubernetes”, “sagemaker”, or “gcp-vertex”.
template_variables
(dict) A dictionary of template variable schemas to be used with the config. Expected format of: { "var-name": { "schema": { "type": ("string", "number", or "integer"), "default": (optional value), "minimum": (optional minimum), "maximum": (optional maximum), "enum": [..."(options)"] } } }
external_links
(dict) Optional dictionary of external links to be used with the queue. Expected format of: { "name": "url" }
prioritization_mode
(str) Optional version of prioritization to use. Either “V0” or None
Returns
The upserted RunQueue.
Raises
ValueError
If any of the parameters are invalid.
wandb.Error
On wandb API errors.
webhook_integrations(
    entity: Optional[str] = None,
    *,
    per_page: int = 50
) -> Iterator['WebhookIntegration']
Return an iterator of webhook integrations for an entity.
Args
entity (str, optional): The entity (e.g. team name) for which to fetch integrations. If not provided, the user’s default entity will be used. per_page (int, optional): Number of integrations to fetch per page. Defaults to 50.
Yields
Iterator[WebhookIntegration]: An iterator of webhook integrations.
Examples:
Get all registered webhook integrations for the team “my-team”:
import wandb
api = wandb.Api()
webhook_integrations = api.webhook_integrations(entity="my-team")
Find only webhook integrations that post requests to https://my-fake-url.com:
webhook_integrations = api.webhook_integrations(entity="my-team")
my_webhooks = [
ig
for ig in webhook_integrations
if ig.url_endpoint.startswith("https://my-fake-url.com")
]
Downloads a file previously saved by a run from the wandb server.
Args
replace (boolean): If True, download will overwrite a local file if it exists. Defaults to False.
root (str): Local directory to save the file. Defaults to ".".
exist_ok (boolean): If True, will not raise ValueError if the file already exists, and will not re-download unless replace=True. Defaults to False.
api (Api, optional): If given, the Api instance used to download the file.
Raises
ValueError if file already exists, replace=False and exist_ok=False.
A single queued run associated with an entity and project. Call run = queued_run.wait_until_running() or run = queued_run.wait_until_finished() to access the run.
Returns an iterable collection of all history records for a run.
Example:
Export all the loss values for an example run
run = api.run("l2k2/examples-numpy-boston/i0wt6xua")
history = run.scan_history(keys=["Loss"])
losses = [row["Loss"] for row in history]
Args
keys ([str], optional): only fetch these keys, and only fetch rows that have all of keys defined. page_size (int, optional): size of pages to fetch from the api. min_step (int, optional): the minimum number of pages to scan at a time. max_step (int, optional): the maximum number of pages to scan at a time.
Returns
An iterable collection over history records (dict).
path (str): name of file to upload. root (str): the root path to save the file relative to. i.e. If you want to have the file saved in the run as “my_dir/file.txt” and you’re currently in “my_dir” you would set root to “../”.
artifact (Artifact): An artifact returned from wandb.Api().artifact(name) use_as (string, optional): A string identifying how the artifact is used in the script. Used to easily differentiate artifacts used in a run, when using the beta wandb launch feature’s artifact swapping functionality.
used_artifacts(
    per_page: int = 100
) -> public.RunArtifacts
Fetches artifacts explicitly used by this run.
Retrieves only the input artifacts that were explicitly declared as used
during the run, typically via run.use_artifact(). Returns a paginated
result that can be iterated over or collected into a single list.
Args
per_page
Number of artifacts to fetch per API request.
Returns
An iterable collection of Artifact objects explicitly used as inputs in this run.
Example:
>>> import wandb
>>> run = wandb.init(project="artifact-example")
>>> run.use_artifact("test_artifact:latest")
>>> run.finish()
>>> api = wandb.Api()
>>> finished_run = api.run(f"{run.entity}/{run.project}/{run.id}")
>>> for used_artifact in finished_run.used_artifacts():
... print(used_artifact.name)
test_artifact
Return sampled history metrics for all runs that fit the filters conditions.
Args
samples
(int, optional) The number of samples to return per run
keys
(list[str], optional) Only return metrics for specific keys
x_axis
(str, optional) Use this metric as the x-axis. Defaults to _step.
format
(Literal, optional) Format to return data in, options are “default”, “pandas”, “polars”
stream
(Literal, optional) “default” for metrics, “system” for machine metrics
Returns
pandas.DataFrame
If format=“pandas”, returns a pandas.DataFrame of history metrics.
polars.DataFrame
If format=“polars”, returns a polars.DataFrame of history metrics.
list of dicts
If format=“default”, returns a list of dicts containing history metrics with a run_id key.
In an ML training pipeline, you could add wandb.init() to the beginning of
your training script as well as your evaluation script, and each piece would
be tracked as a run in W&B.
wandb.init() spawns a new background process to log data to a run, and it
also syncs data to https://wandb.ai by default, so you can see your results
in real-time.
Call wandb.init() to start a run before logging data with wandb.log().
When you’re done logging data, call wandb.finish() to end the run. If you
don’t call wandb.finish(), the run will end when your script exits.
For more on using wandb.init(), including detailed examples, check out our
guide and FAQs.
Examples:
Explicitly set the entity and project and choose a name for the run:
import wandb
run = wandb.init(
entity="geoff",
project="capsules",
name="experiment-2021-10-31",
)
# ... your training code here ...
run.finish()
Add metadata about the run using the config argument:
import wandb
config = {"lr": 0.01, "batch_size": 32}
with wandb.init(config=config) as run:
run.config.update({"architecture": "resnet", "depth": 34})
# ... your training code here ...
Note that you can use wandb.init() as a context manager to automatically
call wandb.finish() at the end of the block.
Args
entity
The username or team name under which the runs will be logged. The entity must already exist, so ensure you’ve created your account or team in the UI before starting to log runs. If not specified, the run will default to your default entity. To change the default entity, go to your settings and update the “Default location to create new projects” under “Default team”.
project
The name of the project under which this run will be logged. If not specified, we use a heuristic to infer the project name based on the system, such as checking the git root or the current program file. If we can’t infer the project name, the project will default to "uncategorized".
dir
The absolute path to the directory where experiment logs and metadata files are stored. If not specified, this defaults to the ./wandb directory. Note that this does not affect the location where artifacts are stored when calling download().
id
A unique identifier for this run, used for resuming. It must be unique within the project and cannot be reused once a run is deleted. The identifier must not contain any of the following special characters: / \ # ? % :. For a short descriptive name, use the name field, or for saving hyperparameters to compare across runs, use config.
name
A short display name for this run, which appears in the UI to help you identify it. By default, we generate a random two-word name, making it easy to cross-reference runs from tables to charts. Keeping these run names brief enhances readability in chart legends and tables. For saving hyperparameters, we recommend using the config field.
notes
A detailed description of the run, similar to a commit message in Git. Use this argument to capture any context or details that may help you recall the purpose or setup of this run in the future.
tags
A list of tags to label this run in the UI. Tags are helpful for organizing runs or adding temporary identifiers like “baseline” or “production.” You can easily add, remove tags, or filter by tags in the UI. If resuming a run, the tags provided here will replace any existing tags. To add tags to a resumed run without overwriting the current tags, use run.tags += ["new_tag"] after calling run = wandb.init().
config
Sets wandb.config, a dictionary-like object for storing input parameters to your run, such as model hyperparameters or data preprocessing settings. The config appears in the UI in an overview page, allowing you to group, filter, and sort runs based on these parameters. Keys should not contain periods (.), and values should be smaller than 10 MB. If a dictionary, argparse.Namespace, or absl.flags.FLAGS is provided, the key-value pairs will be loaded directly into wandb.config. If a string is provided, it is interpreted as a path to a YAML file, from which configuration values will be loaded into wandb.config.
config_exclude_keys
A list of specific keys to exclude from wandb.config.
config_include_keys
A list of specific keys to include in wandb.config.
allow_val_change
Controls whether config values can be modified after their initial set. By default, an exception is raised if a config value is overwritten. For tracking variables that change during training, such as a learning rate, consider using wandb.log() instead. By default, this is False in scripts and True in Notebook environments.
group
Specify a group name to organize individual runs as part of a larger experiment. This is useful for cases like cross-validation or running multiple jobs that train and evaluate a model on different test sets. Grouping allows you to manage related runs collectively in the UI, making it easy to toggle and review results as a unified experiment. For more information, refer to our guide to grouping runs.
job_type
Specify the type of run, especially helpful when organizing runs within a group as part of a larger experiment. For example, in a group, you might label runs with job types such as “train” and “eval”. Defining job types enables you to easily filter and group similar runs in the UI, facilitating direct comparisons.
mode
Specifies how run data is managed, with the following options: - "online" (default): Enables live syncing with W&B when a network connection is available, with real-time updates to visualizations. - "offline": Suitable for air-gapped or offline environments; data is saved locally and can be synced later. Ensure the run folder is preserved to enable future syncing. - "disabled": Disables all W&B functionality, making the run’s methods no-ops. Typically used in testing to bypass W&B operations.
force
Determines if a W&B login is required to run the script. If True, the user must be logged in to W&B; otherwise, the script will not proceed. If False (default), the script can proceed without a login, switching to offline mode if the user is not logged in.
anonymous
Specifies the level of control over anonymous data logging. Available options are: - "never" (default): Requires you to link your W&B account before tracking the run. This prevents unintentional creation of anonymous runs by ensuring each run is associated with an account. - "allow": Enables a logged-in user to track runs with their account, but also allows someone running the script without a W&B account to view the charts and data in the UI. - "must": Forces the run to be logged to an anonymous account, even if the user is logged in.
reinit
Shorthand for the “reinit” setting. Determines the behavior of wandb.init() when a run is active.
resume
Controls the behavior when resuming a run with the specified id. Available options are: - "allow": If a run with the specified id exists, it will resume from the last step; otherwise, a new run will be created. - "never": If a run with the specified id exists, an error will be raised. If no such run is found, a new run will be created. - "must": If a run with the specified id exists, it will resume from the last step. If no run is found, an error will be raised. - "auto": Automatically resumes the previous run if it crashed on this machine; otherwise, starts a new run. - True: Deprecated. Use "auto" instead. - False: Deprecated. Use the default behavior (leaving resume unset) to always start a new run. Note: If resume is set, fork_from and resume_from cannot be used. When resume is unset, the system will always start a new run. For more details, see our guide to resuming runs.
resume_from
Specifies a moment in a previous run to resume a run from, using the format {run_id}?_step={step}. This allows users to truncate the history logged to a run at an intermediate step and resume logging from that step. The target run must be in the same project. If an id argument is also provided, the resume_from argument will take precedence. resume, resume_from and fork_from cannot be used together, only one of them can be used at a time. Note: This feature is in beta and may change in the future.
fork_from
Specifies a point in a previous run from which to fork a new run, using the format {id}?_step={step}. This creates a new run that resumes logging from the specified step in the target run’s history. The target run must be part of the current project. If an id argument is also provided, it must be different from the fork_from argument, an error will be raised if they are the same. resume, resume_from and fork_from cannot be used together, only one of them can be used at a time. Note: This feature is in beta and may change in the future.
save_code
Enables saving the main script or notebook to W&B, aiding in experiment reproducibility and allowing code comparisons across runs in the UI. By default, this is disabled, but you can change the default to enable on your settings page.
tensorboard
Deprecated. Use sync_tensorboard instead.
sync_tensorboard
Enables automatic syncing of W&B logs from TensorBoard or TensorBoardX, saving relevant event files for viewing in the W&B UI. (Default: False)
monitor_gym
Enables automatic logging of videos of the environment when using OpenAI Gym. For additional details, see our guide for gym integration.
settings
Specifies a dictionary or wandb.Settings object with advanced settings for the run.
Returns
A Run object, which is a handle to the current run. Use this object to perform operations like logging data, saving files, and finishing the run. See the Run API for more details.
Raises
Error
If some unknown or internal error happened during the run initialization.
AuthenticationError
If the user failed to provide valid credentials.
CommError
If there was a problem communicating with the W&B server.
UsageError
If the user provided invalid arguments to the function.
KeyboardInterrupt
If the user interrupts the run initialization process.
4.8 - Integrations
Modules
keras module: Tools for integrating wandb with Keras.
WandbCallback will automatically log history data from any
metrics collected by keras: loss and anything passed into keras_model.compile().
WandbCallback will set summary metrics for the run associated with the “best” training
step, where “best” is defined by the monitor and mode attributes. This defaults
to the epoch with the minimum val_loss. WandbCallback will by default save the model
associated with the best epoch.
WandbCallback can optionally log gradient and parameter histograms.
WandbCallback can optionally save training and validation data for wandb to visualize.
Args
monitor
(str) name of metric to monitor. Defaults to val_loss.
mode
(str) one of {auto, min, max}. min: save the model when monitor is minimized. max: save the model when monitor is maximized. auto: try to guess when to save the model (default).
save_model
True: save a model when monitor beats all previous epochs. False: don’t save models.
save_graph
(boolean) if True save model graph to wandb (default to True).
save_weights_only
(boolean) if True, then only the model’s weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)).
log_weights
(boolean) if True save histograms of the model’s layer’s weights.
log_gradients
(boolean) if True log histograms of the training gradients
training_data
(tuple) Same format (X,y) as passed to model.fit. This is needed for calculating gradients - this is mandatory if log_gradients is True.
validation_data
(tuple) Same format (X,y) as passed to model.fit. A set of data for wandb to visualize. If this is set, every epoch, wandb will make a small number of predictions and save the results for later visualization. In case you are working with image data, please also set input_type and output_type in order to log correctly.
generator
(generator) a generator that returns validation data for wandb to visualize. This generator should return tuples (X,y). Either validation_data or generator should be set for wandb to visualize specific data examples. In case you are working with image data, please also set input_type and output_type in order to log correctly.
validation_steps
(int) if validation_data is a generator, how many steps to run the generator for the full validation set.
labels
(list) If you are visualizing your data with wandb, this list of labels will convert numeric output to understandable strings if you are building a multiclass classifier. If you are making a binary classifier you can pass in a list of two labels [“label for false”, “label for true”]. If validation_data and generator are both unset, this won’t do anything.
predictions
(int) the number of predictions to make for visualization each epoch, max is 100.
input_type
(string) type of the model input to help visualization. can be one of: (image, images, segmentation_mask, auto).
output_type
(string) type of the model output to help visualization. can be one of: (image, images, segmentation_mask, label).
log_evaluation
(boolean) if True, save a Table containing validation data and the model’s predictions at each epoch. See validation_indexes, validation_row_processor, and output_row_processor for additional details.
class_colors
([float, float, float]) if the input or output is a segmentation mask, an array containing an rgb tuple (range 0-1) for each class.
log_batch_frequency
(integer) if None, callback will log every epoch. If set to integer, callback will log training metrics every log_batch_frequency batches.
log_best_prefix
(string) if None, no extra summary metrics will be saved. If set to a string, the monitored metric and epoch will be prepended with this value and stored as summary metrics.
validation_indexes
([wandb.data_types._TableLinkMixin]) an ordered list of index keys to associate with each validation example. If log_evaluation is True and validation_indexes is provided, then a Table of validation data will not be created and instead each prediction will be associated with the row represented by the TableLinkMixin. The most common way to obtain such keys is to use Table.get_index(), which returns a list of row keys.
validation_row_processor
(Callable) a function to apply to the validation data, commonly used to visualize the data. The function will receive an ndx (int) and a row (dict). If your model has a single input, then row["input"] will be the input data for the row. Else, it will be keyed based on the name of the input slot. If your fit function takes a single target, then row["target"] will be the target data for the row. Else, it will be keyed based on the name of the output slots. For example, if your input data is a single ndarray, but you wish to visualize the data as an Image, then you can provide lambda ndx, row: {"img": wandb.Image(row["input"])} as the processor. Ignored if log_evaluation is False or validation_indexes are present.
output_row_processor
(Callable) same as validation_row_processor, but applied to the model’s output. row["output"] will contain the results of the model output.
infer_missing_processors
(bool) Determines if validation_row_processor and output_row_processor should be inferred if missing. Defaults to True. If labels are provided, we will attempt to infer classification-type processors where appropriate.
log_evaluation_frequency
(int) Determines the frequency which evaluation results will be logged. Default 0 (only at the end of training). Set to 1 to log every epoch, 2 to log every other epoch, and so on. Has no effect when log_evaluation is False.
compute_flops
(bool) Compute the FLOPs of your Keras Sequential or Functional model, in GigaFLOPs.
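As a concrete illustration of the labels argument above, mapping a classifier’s numeric output to a readable label is a simple list lookup (this sketch is illustrative, not part of the callback):

```python
# Illustrative: how a labels list maps numeric classifier output to strings.
labels = ["label for false", "label for true"]  # binary classifier

probs = [0.2, 0.8]  # hypothetical model output for one example
pred = max(range(len(probs)), key=probs.__getitem__)  # argmax
print(labels[pred])  # "label for true"
```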
You can build callbacks for visualizing model predictions on_epoch_end
that can be passed to model.fit() for classification, object detection,
segmentation, etc. tasks.
To use this, inherit from this base callback class and implement the
add_ground_truth and add_model_prediction methods.
The base class will take care of the following:
Initialize data_table for logging the ground truth and
pred_table for predictions.
The data uploaded to data_table is used as a reference for the
pred_table. This is to reduce the memory footprint. The data_table_ref
is a list that can be used to access the referenced data.
Check out the example below to see how it’s done.
Log the tables to W&B as W&B Artifacts.
Each new pred_table is logged as a new version with aliases.
To have more fine-grained control, you can override the on_train_begin and
on_epoch_end methods. If you want to log the samples after N batches, you
can implement the on_train_batch_end method.
Use this method to write the logic for adding model prediction for validation/
training data to pred_table initialized using init_pred_table method.
Example:
# Assuming the dataloader is not shuffling the samples.
for idx, data in enumerate(dataloader):
    preds = model.predict(data)
    self.pred_table.add_data(
        self.data_table_ref.data[idx][0],
        self.data_table_ref.data[idx][1],
        preds,
    )
This method is called in on_epoch_end or an equivalent hook.
Log the data_table as W&B artifact and call use_artifact on it.
This lets the evaluation table use the reference of already uploaded data
(images, text, scalar, etc.) without re-uploading.
Args
name
(str) A human-readable name for this artifact, which is how you can identify this artifact in the UI or reference it in use_artifact calls. (default is ‘val’)
type
(str) The type of the artifact, which is used to organize and differentiate artifacts. (default is ‘dataset’)
table_name
(str) The name of the table as will be displayed in the UI. (default is ‘val_data’).
WandbMetricsLogger automatically logs to wandb the logs dictionary that
callback methods receive as an argument.
This callback automatically logs the following to a W&B run page:
system (CPU/GPU/TPU) metrics,
train and validation metrics defined in model.compile,
learning rate (whether a fixed value or a learning rate scheduler)
Notes:
If you resume training by passing initial_epoch to model.fit and you are using a
learning rate scheduler, make sure to pass initial_global_step to
WandbMetricsLogger. The initial_global_step is step_size * initial_step, where
step_size is the number of training steps per epoch. step_size can be calculated as
the product of the cardinality of the training dataset and the batch size.
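The arithmetic in the note above is simple; a sketch, with an illustrative function name:

```python
def compute_initial_global_step(step_size, initial_epoch):
    """initial_global_step = step_size * initial_step, where step_size is
    the number of training steps per epoch and initial_step corresponds
    to the epoch passed as initial_epoch."""
    return step_size * initial_epoch

# Resuming at epoch 10 with 500 training steps per epoch:
step = compute_initial_global_step(500, 10)  # 5000
```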
Args
log_freq
(“epoch”, “batch”, or int) if “epoch”, logs metrics at the end of each epoch. If “batch”, logs metrics at the end of each batch. If an integer, logs metrics at the end of that many batches. Defaults to “epoch”.
initial_global_step
(int) Use this argument to correctly log the learning rate when you resume training from some initial_epoch, and a learning rate scheduler is used. This can be computed as step_size * initial_step. Defaults to 0.
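The log_freq options can be modeled as a small predicate (a sketch of the behavior described above, not WandbMetricsLogger’s internal code):

```python
def should_log(log_freq, batch_index=None, epoch_end=False):
    """Decide whether metrics would be logged at this point.

    log_freq: "epoch", "batch", or a positive int N (log every N batches).
    batch_index: 1-based index of the batch that just finished, if any.
    epoch_end: True when called at the end of an epoch.
    """
    if log_freq == "epoch":
        return epoch_end
    if log_freq == "batch":
        return batch_index is not None
    return batch_index is not None and batch_index % log_freq == 0
```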
This callback is to be used in conjunction with training using model.fit() to save
a model or weights (in a checkpoint file) at some interval. The model checkpoints
will be logged as W&B Artifacts. You can learn more here:
https://docs.wandb.ai/guides/artifacts
This callback provides the following features:
Save the model that has achieved “best performance” based on “monitor”.
Save the model at the end of every epoch regardless of the performance.
Save the model at the end of epoch or after a fixed number of training batches.
Save only model weights, or save the whole model.
Save the model either in SavedModel format or in .h5 format.
Args
filepath
(Union[str, os.PathLike]) path to save the model file. filepath can contain named formatting options, which will be filled by the value of epoch and keys in logs (passed in on_epoch_end). For example: if filepath is model-{epoch:02d}-{val_loss:.2f}, then the model checkpoints will be saved with the epoch number and the validation loss in the filename.
monitor
(str) The metric name to monitor. Defaults to “val_loss”.
verbose
(int) Verbosity mode, 0 or 1. Mode 0 is silent, and mode 1 displays messages when the callback takes an action.
save_best_only
(bool) if save_best_only=True, it only saves when the model is considered the “best” and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn’t contain formatting options like {epoch} then filepath will be overwritten by each new better model locally. The model logged as an artifact will still be associated with the correct monitor. Artifacts will be uploaded continuously and versioned separately as a new best model is found.
save_weights_only
(bool) if True, then only the model’s weights will be saved.
mode
(Mode) one of {‘auto’, ‘min’, ‘max’}. For val_acc, this should be max, for val_loss this should be min, etc.
save_freq
(Union[SaveStrategy, int]) “epoch” or integer. When using “epoch”, the callback saves the model after each epoch. When using an integer, the callback saves the model at the end of this many batches. Note that when monitoring validation metrics such as val_acc or val_loss, save_freq must be set to “epoch” as those metrics are only available at the end of an epoch.
initial_value_threshold
(Optional[float]) Floating point initial “best” value of the metric to be monitored.
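The filepath templating described above relies on standard Python format fields filled from the epoch number and the logs dict; for example (values are illustrative):

```python
# Resolve a templated checkpoint path the way the docstring describes.
filepath = "model-{epoch:02d}-{val_loss:.2f}"
logs = {"val_loss": 0.1234, "val_acc": 0.91}  # as passed to on_epoch_end

checkpoint_name = filepath.format(epoch=7, **logs)
print(checkpoint_name)  # model-07-0.12
```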
Attributes
Methods
set_model
set_model(
model
)
set_params
set_params(
params
)
4.9 - launch-library
Classes
class LaunchAgent: Launch agent class that polls the given run queues and launches runs for W&B Launch.
job
string reference to a wandb.Job, e.g. wandb/test/my-job:latest
api
An instance of a wandb Api from wandb.apis.internal.
entry_point
Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
version
For Git-based projects, either a commit hash or a branch name.
name
Name under which to launch the run.
resource
Execution backend for the run.
resource_args
Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
project
Target project to send launched run to
entity
Target entity to send launched run to
config
A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”.
synchronous
Whether to block while waiting for a run to complete. Defaults to True. Note that if synchronous is False and backend is “local-container”, this method will return, but the current process will block when exiting until the local run completes. If the current process is interrupted, any asynchronous runs launched via this method will be terminated. If synchronous is True and the run fails, the current process will error out as well.
run_id
ID for the run (To ultimately replace the :name: field)
repository
string name of repository path for remote registry
Example:
from wandb.sdk.launch import launch

job = "wandb/jobs/Hello World:latest"
params = {"epochs": 5}

# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch(api, job, parameters=params)
Returns
an instance of wandb.launch.SubmittedRun exposing information (e.g. run ID) about the launched run.
Raises
wandb.exceptions.ExecutionError If a run launched in blocking mode is unsuccessful.
uri
URI of experiment to run. A wandb run uri or a Git repository URI.
job
string reference to a wandb.Job, e.g. wandb/test/my-job:latest
config
A dictionary containing the configuration for the run. May also contain resource specific arguments under the key “resource_args”
template_variables
A dictionary containing values of template variables for a run queue. Expected format of {"VAR_NAME": VAR_VALUE}
project
Target project to send launched run to
entity
Target entity to send launched run to
queue
the name of the queue to enqueue the run to
priority
the priority level of the job, where 1 is the highest priority
resource
Execution backend for the run: W&B provides built-in support for “local-container” backend
entry_point
Entry point to run within the project. Defaults to using the entry point used in the original run for wandb URIs, or main.py for git repository URIs.
name
Name under which to launch the run.
version
For Git-based projects, either a commit hash or a branch name.
docker_image
The name of the docker image to use for the run.
resource_args
Resource related arguments for launching runs onto a remote backend. Will be stored on the constructed launch config under resource_args.
run_id
optional string indicating the id of the launched run
build
optional flag defaulting to false; requires queue to be set. If true, an image is built, a job artifact is created, and a reference to that job artifact is pushed to the queue.
repository
optional string to control the name of the remote repository, used when pushing images to a registry
project_queue
optional string to control the name of the project for the queue. Primarily used for back compatibility with project scoped queues
Example:
from wandb.sdk.launch import launch_add

project_uri = "https://github.com/wandb/examples"
params = {"alpha": 0.5, "l1_ratio": 0.01}

# Run W&B project and create a reproducible docker environment
# on a local host
api = wandb.apis.internal.Api()
launch_add(uri=project_uri, parameters=params)
Returns
an instance of wandb.api.public.QueuedRun, which gives information about the queued run, or, if wait_until_started or wait_until_finished is called, access to the underlying Run information.
Use log to log data from runs, such as scalars, images, video,
histograms, plots, and tables.
See our guides to logging for
live examples, code snippets, best practices, and more.
The most basic usage is run.log({"train-loss": 0.5, "accuracy": 0.9}).
This will save the loss and accuracy to the run’s history and update
the summary values for these metrics.
Visualize logged data in the workspace at wandb.ai,
or locally on a self-hosted instance
of the W&B app, or export data to visualize and explore locally, e.g. in
Jupyter notebooks, with our API.
Logged values don’t have to be scalars. Logging any wandb object is supported.
For example run.log({"example": wandb.Image("myimage.jpg")}) will log an
example image which will be displayed nicely in the W&B UI.
See the reference documentation
for all of the different supported types or check out our
guides to logging for examples,
from 3D molecular structures and segmentation masks to PR curves and histograms.
You can use wandb.Table to log structured data. See our
guide to logging tables
for details.
The W&B UI organizes metrics with a forward slash (/) in their name
into sections named using the text before the final slash. For example,
logging train/loss and validate/loss results in two sections named “train” and “validate”.
Only one level of nesting is supported; run.log({"a/b/c": 1})
produces a section named “a/b”.
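The grouping rule (the text before the final slash) can be sketched in plain Python; this helper is illustrative, not the UI’s implementation:

```python
def section_for(metric_name):
    """Return the section a metric lands in, or None for top level."""
    head, sep, _ = metric_name.rpartition("/")
    return head if sep else None

# run.log({"train/loss": 0.3, "validate/loss": 0.4}) -> two sections
print(section_for("train/loss"))  # train
print(section_for("a/b/c"))       # a/b (one level of nesting)
print(section_for("accuracy"))    # None (no section)
```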
run.log is not intended to be called more than a few times per second.
For optimal performance, limit your logging to once every N iterations,
or collect data over multiple iterations and log it in a single step.
The W&B step
With basic usage, each call to log creates a new “step”.
The step must always increase, and it is not possible to log
to a previous step.
Note that you can use any metric as the X axis in charts.
In many cases, it is better to treat the W&B step like
you’d treat a timestamp rather than a training step.
# Example: log an "epoch" metric for use as an X axis.
run.log({"epoch": 40, "train-loss": 0.5})
Args
data
A dict with str keys and values that are serializable Python objects, including: int, float, and string; any of the wandb.data_types; lists, tuples, and NumPy arrays of serializable Python objects; other dicts of this structure.
step
The step number to log. If None, then an implicit auto-incrementing step is used. See the notes in the description.
commit
If true, finalize and upload the step. If false, then accumulate data for the step. See the notes in the description. If step is None, then the default is commit=True; otherwise, the default is commit=False.
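The step and commit semantics above can be modeled with a tiny mock (a sketch of the documented behavior, not the wandb implementation):

```python
class MockRun:
    """Accumulates data until a step is committed, per run.log's docs."""

    def __init__(self):
        self.history = []   # one dict per committed step
        self._pending = {}

    def log(self, data, step=None, commit=None):
        if commit is None:
            commit = step is None  # default: commit when step is implicit
        self._pending.update(data)
        if commit:
            self.history.append(self._pending)
            self._pending = {}

mock = MockRun()
mock.log({"loss": 0.2}, commit=False)  # accumulate
mock.log({"accuracy": 0.8})            # finalize the step
print(mock.history)  # [{'loss': 0.2, 'accuracy': 0.8}]
```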
import wandb
run = wandb.init()
run.log({"accuracy": 0.9, "epoch": 5})
Incremental logging
import wandb
run = wandb.init()
run.log({"loss": 0.2}, commit=False)
# Somewhere else when I'm ready to report this step:
run.log({"accuracy": 0.8})
Histogram
import numpy as np
import wandb
# sample gradients at random from normal distribution
gradients = np.random.randn(100, 100)
run = wandb.init()
run.log({"gradients": wandb.Histogram(gradients)})
Image from numpy
import numpy as np
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(low=0, high=256, size=(100, 100, 3))
image = wandb.Image(pixels, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Image from PIL
import numpy as np
from PIL import Image as PILImage
import wandb
run = wandb.init()
examples = []
for i in range(3):
pixels = np.random.randint(
low=0,
high=256,
size=(100, 100, 3),
dtype=np.uint8,
)
pil_image = PILImage.fromarray(pixels, mode="RGB")
image = wandb.Image(pil_image, caption=f"random field {i}")
examples.append(image)
run.log({"examples": examples})
Video from numpy
import numpy as np
import wandb
run = wandb.init()
# axes are (time, channel, height, width)
frames = np.random.randint(
low=0,
high=256,
size=(10, 3, 100, 100),
dtype=np.uint8,
)
run.log({"video": wandb.Video(frames, fps=4)})
Matplotlib Plot
from matplotlib import pyplot as plt
import numpy as np
import wandb
run = wandb.init()
fig, ax = plt.subplots()
x = np.linspace(0, 10)
y = x * x
ax.plot(x, y)  # plot y = x^2
run.log({"chart": fig})
PR Curve
import wandb
run = wandb.init()
run.log({"pr": wandb.plot.pr_curve(y_test, y_probas, labels)})
By default, this will only store credentials locally without
verifying them with the W&B server. To verify credentials, pass
verify=True.
Args
anonymous
(string, optional) Can be “must”, “allow”, or “never”. If set to “must”, always log a user in anonymously. If set to “allow”, only create an anonymous user if the user isn’t already logged in. If set to “never”, never log a user anonymously. Default set to “never”.
key
(string, optional) The API key to use.
relogin
(bool, optional) If true, will re-prompt for API key.
host
(string, optional) The host to connect to.
force
(bool, optional) If true, will force a relogin.
timeout
(int, optional) Number of seconds to wait for user input.
verify
(bool) Verify the credentials with the W&B server.
referrer
(string, optional) The referrer to use in the URL login request.
Returns
bool
True if the API key is configured.
Raises
AuthenticationError: if api_key fails verification with the server.
UsageError: if api_key cannot be configured and there is no TTY.
While a run is active, anything you log with wandb.log will be sent to that run.
If you want to start more runs in the same script or notebook, you’ll need to
finish the run that is in-flight. Runs can be finished with wandb.finish or
by using them in a with block:
import wandb
wandb.init()
wandb.finish()
assert wandb.run is None

with wandb.init() as run:
    pass  # log data here

assert wandb.run is None
See the documentation for wandb.init for more on creating runs, or check out
our guide to wandb.init.
In distributed training, you can either create a single run in the rank 0 process
and then log information only from that process, or you can create a run in each process,
logging from each separately, and group the results together with the group argument
to wandb.init. For more details on distributed training with W&B, check out
our guide.
Currently, there is a parallel Run object in the wandb.Api. Eventually these
two objects will be merged.
Attributes
summary
(Summary) Single values set for each wandb.log() key. By default, summary is set to the last value logged. You can manually set summary to the best value, like max accuracy, instead of the final value.
config
Config object associated with this run.
dir
The directory where files associated with the run are saved.
entity
The name of the W&B entity associated with the run. Entity can be a username or the name of a team or organization.
group
Name of the group associated with the run. Setting a group helps the W&B UI organize runs in a sensible way. If you are doing a distributed training you should give all of the runs in the training the same group. If you are doing cross-validation you should give all the cross-validation folds the same group.
id
Identifier for this run.
mode
For compatibility with 0.9.x and earlier, deprecate eventually.
name
Display name of the run. Display names are not guaranteed to be unique and may be descriptive. By default, they are randomly generated.
notes
Notes associated with the run, if there are any. Notes can be a multiline string and can also use markdown and latex equations inside $$, like $x + 3$.
path
Path to the run. Run paths include entity, project, and run ID, in the format entity/project/run_id.
project
Name of the W&B project associated with the run.
project_url
URL of the W&B project associated with the run, if there is one. Offline runs do not have a project URL.
resumed
True if the run was resumed, False otherwise.
settings
A frozen copy of run’s Settings object.
start_time
Unix timestamp (in seconds) of when the run started.
starting_step
The first step of the run.
step
Current value of the step. This counter is incremented by wandb.log.
sweep_id
Identifier for the sweep associated with the run, if there is one.
sweep_url
URL of the sweep associated with the run, if there is one. Offline runs do not have a sweep URL.
tags
Tags associated with the run, if there are any.
url
The url for the W&B run, if there is one. Offline runs will not have a url.
step_metric
The name of another metric to serve as the X-axis for this metric in automatically generated charts.
step_sync
Automatically insert the last value of step_metric into run.log() if it is not provided explicitly. Defaults to True if step_metric is specified.
hidden
Hide this metric from automatic plots.
summary
Specify aggregate metrics added to summary. Supported aggregations include “min”, “max”, “mean”, “last”, “best”, “copy” and “none”. “best” is used together with the goal parameter. “none” prevents a summary from being generated. “copy” is deprecated and should not be used.
goal
Specify how to interpret the “best” summary type. Supported options are “minimize” and “maximize”.
overwrite
If false, then this call is merged with previous define_metric calls for the same metric by using their values for any unspecified parameters. If true, then unspecified parameters overwrite values specified by previous calls.
Returns
An object that represents this call but can otherwise be discarded.
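The summary aggregations and the goal parameter above can be illustrated in plain Python (semantics only, not wandb’s implementation):

```python
def summarize(values, summary="last", goal="minimize"):
    """Aggregate a metric's logged values per the summary option."""
    if summary == "min":
        return min(values)
    if summary == "max":
        return max(values)
    if summary == "mean":
        return sum(values) / len(values)
    if summary == "best":  # interpreted via goal
        return min(values) if goal == "minimize" else max(values)
    if summary == "last":
        return values[-1]
    return None  # "none": no summary generated

losses = [0.9, 0.4, 0.6]
print(summarize(losses, "last"))                   # 0.6
print(summarize(losses, "best", goal="minimize"))  # 0.4
```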
Marks the completion of a W&B run and ensures all data is synced to the server.
The run’s final state is determined by its exit conditions and sync status.
Run States:
Running: Active run that is logging data and/or sending heartbeats.
Crashed: Run that stopped sending heartbeats unexpectedly.
Finished: Run completed successfully (exit_code=0) with all data synced.
Failed: Run completed with errors (exit_code!=0).
Args
exit_code
Integer indicating the run’s exit status. Use 0 for success, any other value marks the run as failed.
quiet
Deprecated. Configure logging verbosity using wandb.Settings(quiet=...).
Finishes a non-finalized artifact as output of a run.
Subsequent “upserts” with the same distributed ID will result in a new version.
Args
artifact_or_path
(str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact.
name
(str, optional) An artifact name. May be prefixed with entity/project. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified.
type
(str) The type of artifact to log, examples include dataset, model
aliases
(list, optional) Aliases to apply to this artifact, defaults to ["latest"]
distributed_id
(string, optional) Unique string that all distributed jobs share. If None, defaults to the run’s group name.
Link the given artifact to a portfolio (a promoted collection of artifacts).
The linked artifact will be visible in the UI for the specified portfolio.
Args
artifact
the (public or local) artifact which will be linked
target_path
str - takes the following forms: {portfolio}, {project}/{portfolio}, or {entity}/{project}/{portfolio}
aliases
List[str] - optional alias(es) that will only be applied on this linked artifact inside the portfolio. The alias “latest” will always be applied to the latest version of an artifact that is linked.
Log a model artifact version and link it to a registered model in the model registry.
The linked model version will be visible in the UI for the specified registered model.
Steps:
Check if ’name’ model artifact has been logged. If so, use the artifact version that matches the files
located at ‘path’ or log a new version. Otherwise log files under ‘path’ as a new model artifact, ’name’
of type ‘model’.
Check if registered model with name ‘registered_model_name’ exists in the ‘model-registry’ project.
If not, create a new registered model with name ‘registered_model_name’.
Link version of model artifact ’name’ to registered model, ‘registered_model_name’.
Attach aliases from ‘aliases’ list to the newly linked model artifact version.
Args
path
(str) A path to the contents of this model, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path
registered_model_name
(str) - the name of the registered model that the model is to be linked to. A registered model is a collection of model versions linked to the model registry, typically representing a team’s specific ML Task. The entity that this registered model belongs to will be derived from the run
name
(str, optional) - the name of the model artifact that files in ‘path’ will be logged to. This will default to the basename of the path prepended with the current run id if not specified.
aliases
(List[str], optional) - alias(es) that will only be applied on this linked artifact inside the registered model. The alias “latest” will always be applied to the latest version of an artifact that is linked.
artifact_or_path
(str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact.
name
(str, optional) An artifact name. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified.
type
(str) The type of artifact to log, examples include dataset, model
aliases
(list, optional) Aliases to apply to this artifact, defaults to ["latest"]
tags
(list, optional) Tags to apply to this artifact, if any.
Save the current state of your code to a W&B Artifact.
By default, it walks the current directory and logs all files that end with .py.
Args
root
The relative (to os.getcwd()) or absolute path to recursively find code from.
name
(str, optional) The name of our code artifact. By default, we’ll name the artifact source-$PROJECT_ID-$ENTRYPOINT_RELPATH. There may be scenarios where you want many runs to share the same artifact. Specifying name allows you to achieve that.
include_fn
A callable that accepts a file path and (optionally) root path and returns True when it should be included and False otherwise. This defaults to: lambda path, root: path.endswith(".py")
exclude_fn
A callable that accepts a file path and (optionally) root path and returns True when it should be excluded and False otherwise. This defaults to a function that excludes all files within <root>/.wandb/ and <root>/wandb/ directories.
Logs a model artifact containing the contents inside the ‘path’ to a run and marks it as an output to this run.
Args
path
(str) A path to the contents of this model, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path
name
(str, optional) A name to assign to the model artifact that the file contents will be added to. The string must contain only the following alphanumeric characters: dashes, underscores, and dots. This will default to the basename of the path prepended with the current run id if not specified.
aliases
(list, optional) Aliases to apply to the created model artifact, defaults to ["latest"]
Relative paths are relative to the current working directory.
A Unix glob, such as “myfiles/*”, is expanded at the time save is
called regardless of the policy. In particular, new files are not
picked up automatically.
A base_path may be provided to control the directory structure of
uploaded files. It should be a prefix of glob_str, and the directory
structure beneath it is preserved. It’s best understood through
examples:
wandb.save("these/are/myfiles/*")
# => Saves files in a "these/are/myfiles/" folder in the run.
wandb.save("these/are/myfiles/*", base_path="these")
# => Saves files in an "are/myfiles/" folder in the run.
wandb.save("/User/username/Documents/run123/*.txt")
# => Saves files in a "run123/" folder in the run. See note below.
wandb.save("/User/username/Documents/run123/*.txt", base_path="/User")
# => Saves files in a "username/Documents/run123/" folder in the run.
wandb.save("files/*/saveme.txt")
# => Saves each "saveme.txt" file in an appropriate subdirectory
# of "files/".
Note: when given an absolute path or glob and no base_path, one
directory level is preserved as in the example above.
Args
glob_str
A relative or absolute path or Unix glob.
base_path
A path to use to infer a directory structure; see examples.
policy
One of live, now, or end. * live: upload the file as it changes, overwriting the previous version * now: upload the file once now * end: upload file when the run ends
Returns
Paths to the symlinks created for the matched files. For historical reasons, this may return a boolean in legacy code.
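The effect of base_path on the stored directory structure can be sketched with plain path arithmetic. This is an illustrative model of the documented examples, not wandb internals:

```python
import os
from typing import Optional

# Illustrative sketch: the path stored in the run is the file's path
# relative to base_path; with no base_path, one directory level above
# the file is preserved (as in the absolute-path example above).

def stored_path(file_path: str, base_path: Optional[str] = None) -> str:
    if base_path is None:
        # Keep one directory level by defaulting base_path to the
        # grandparent of the file.
        parent = os.path.dirname(file_path)
        base_path = os.path.dirname(parent)
    return os.path.relpath(file_path, base_path)

print(stored_path("these/are/myfiles/data.txt", base_path="these"))
# are/myfiles/data.txt
print(stored_path("/User/username/Documents/run123/log.txt"))
# run123/log.txt
```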
Declare (or append to) a non-finalized artifact as output of a run.
Note that you must call run.finish_artifact() to finalize the artifact.
This is useful when distributed jobs need to all contribute to the same artifact.
Args
artifact_or_path
(str or Artifact) A path to the contents of this artifact, can be in the following forms: - /local/directory - /local/directory/file.txt - s3://bucket/path You can also pass an Artifact object created by calling wandb.Artifact.
name
(str, optional) An artifact name. May be prefixed with entity/project. Valid names can be in the following forms: - name:version - name:alias - digest This will default to the basename of the path prepended with the current run id if not specified.
type
(str) The type of artifact to log, examples include dataset, model
aliases
(list, optional) Aliases to apply to this artifact, defaults to ["latest"]
distributed_id
(string, optional) Unique string that all distributed jobs share. If None, defaults to the run’s group name.
Call download or file on the returned object to get the contents locally.
Args
artifact_or_name
(str or Artifact) An artifact name. May be prefixed with project/ or entity/project/. If no entity is specified in the name, the Run or API setting’s entity is used. Valid names can be in the following forms: - name:version - name:alias You can also pass an Artifact object created by calling wandb.Artifact
type
(str, optional) The type of artifact to use.
aliases
(list, optional) Aliases to apply to this artifact
use_as
(string, optional) Optional string indicating what purpose the artifact was used with. Will be shown in UI.
Download the files logged in a model artifact ’name’.
Args
name
(str) A model artifact name. ’name’ must match the name of an existing logged model artifact. May be prefixed with entity/project/. Valid names can be in the following forms: - model_artifact_name:version - model_artifact_name:alias
Hooks into the given PyTorch model(s) to monitor gradients and the model’s computational graph.
This function can track parameters, gradients, or both during training. It should be
extended to support arbitrary machine learning models in the future.
Args
models (Union[torch.nn.Module, Sequence[torch.nn.Module]]): A single model or a sequence of models to be monitored.
criterion (Optional[torch.F]): The loss function being optimized (optional).
log (Optional[Literal["gradients", "parameters", "all"]]): Specifies whether to log "gradients", "parameters", or "all". Set to None to disable logging. (default="gradients")
log_freq (int): Frequency (in batches) to log gradients and parameters. (default=1000)
idx (Optional[int]): Index used when tracking multiple models with wandb.watch. (default=None)
log_graph (bool): Whether to log the model's computational graph. (default=False)
Raises
ValueError
If wandb.init has not been called or if any of the models are not instances of torch.nn.Module.
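The log_freq cadence can be sketched as a simple modulus check. The helper below is hypothetical and only illustrates the sampling interval, not wandb's internal hook logic:

```python
# Illustrative sketch of the log_freq cadence: with log_freq=1000,
# gradients/parameters are captured roughly once every 1000 batches
# rather than on every step.

def should_log(batch_idx: int, log_freq: int = 1000) -> bool:
    # Hypothetical helper, not part of the wandb API.
    return batch_idx % log_freq == 0

# In training code, watch is attached once after wandb.init():
# run = wandb.init(project="demo")
# run.watch(model, log="gradients", log_freq=1000, log_graph=False)

logged = [i for i in range(3001) if should_log(i)]
print(logged)  # [0, 1000, 2000, 3000]
```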
Search for hyperparameters that optimize a cost function
of a machine learning model by testing various combinations.
Make note of the unique identifier, sweep_id, that is returned.
At a later step, provide the sweep_id to a sweep agent.
Args
sweep
The configuration of a hyperparameter search. (or configuration generator). See Sweep configuration structure for information on how to define your sweep. If you provide a callable, ensure that the callable does not take arguments and that it returns a dictionary that conforms to the W&B sweep config spec.
entity
The username or team name where you want to send W&B runs created by the sweep. Ensure that the entity you specify already exists. If you don't specify an entity, the run is sent to your default entity, which is usually your username.
project
The name of the project where W&B sends runs created from the sweep. If the project is not specified, the run is sent to a project labeled 'Uncategorized'.
prior_runs
The run IDs of existing runs to add to this sweep.
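A minimal sweep configuration dictionary of the shape wandb.sweep accepts. The metric and parameter names here are placeholders:

```python
# A minimal sweep configuration of the shape wandb.sweep() accepts.
# Metric and parameter names are placeholders.
sweep_config = {
    "method": "bayes",  # grid, random, or bayes
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-5, "max": 1e-2},
        "batch_size": {"values": [16, 32, 64]},
    },
}

# With network access and a logged-in client:
# sweep_id = wandb.sweep(sweep=sweep_config, entity="my-team", project="my-project")
# wandb.agent(sweep_id, function=train)
```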
title (Optional[str]): The text that appears at the top of the plot.
metrics (LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from.
orientation (Literal["v", "h"]): The orientation of the bar plot. Set to either vertical ("v") or horizontal ("h"). Defaults to horizontal ("h").
range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
title_x (Optional[str]): The label of the x-axis.
title_y (Optional[str]): The label of the y-axis.
groupby (Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with specified function. Options include mean, min, max, median, sum, samples, or None.
groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
max_runs_to_show (Optional[int]): The maximum number of runs to show on the plot.
max_bars_to_show (Optional[int]): The maximum number of bars to show on the bar plot.
custom_expressions (Optional[LList[str]]): A list of custom expressions to be used in the bar plot.
legend_template (Optional[str]): The template for the legend.
font_size (Optional[FontSize]): The size of the line plot's font. Options include small, medium, large, auto, or None.
line_titles (Optional[dict]): The titles of the lines. The keys are the line names and the values are the titles.
line_colors (Optional[dict]): The colors of the lines. The keys are the line names and the values are the colors.
classBlockQuote
A block of quoted text.
Attributes:
text (str): The text of the block quote.
classCalloutBlock
A block of callout text.
Attributes:
text (str): The callout text.
classCheckedList
A list of items with checkboxes. Add one or more CheckedListItem within CheckedList.
Attributes:
items (LList[CheckedListItem]): A list of one or more CheckedListItem objects.
classCheckedListItem
A list item with a checkbox. Add one or more CheckedListItem within CheckedList.
Attributes:
text (str): The text of the list item.
checked (bool): Whether the checkbox is checked. By default, set to False.
classCodeBlock
A block of code.
Attributes:
code (str): The code in the block.
language (Optional[Language]): The language of the code. Language specified is used for syntax highlighting. By default, set to python. Options include javascript, python, css, json, html, markdown, yaml.
classCodeComparer
A panel object that compares the code between two different runs.
Attributes:
diff(Literal['split', 'unified']): How to display code differences. Options include split and unified.
classConfig
Metrics logged to a run’s config object. Config objects are commonly logged using run.config[name] = ... or passing a config as a dictionary of key-value pairs, where the key is the name of the metric and the value is the value of that metric.
Attributes:
name (str): The name of the metric.
classCustomChart
A panel that shows a custom chart. The chart is defined by a weave query.
Attributes:
query (dict): The query that defines the custom chart. The key is the name of the field, and the value is the query.
chart_name (str): The title of the custom chart.
chart_fields (dict): Key-value pairs that define the axis of the plot. Where the key is the label, and the value is the metric.
chart_strings (dict): Key-value pairs that define the strings in the chart.
classGallery
A block that renders a gallery of reports and URLs.
Attributes:
items (List[Union[GalleryReport, GalleryURL]]): A list of GalleryReport and GalleryURL objects.
classGalleryReport
A reference to a report in the gallery.
Attributes:
report_id (str): The ID of the report.
classGalleryURL
A URL to an external resource.
Attributes:
url (str): The URL of the resource.
title (Optional[str]): The title of the resource.
description (Optional[str]): The description of the resource.
image_url (Optional[str]): The URL of an image to display.
classGradientPoint
A point in a gradient.
Attributes:
color: The color of the point.
offset: The position of the point in the gradient. The value should be between 0 and 100.
classH1
An H1 heading with the text specified.
Attributes:
text (str): The text of the heading.
collapsed_blocks (Optional[LList[“BlockTypes”]]): The blocks to show when the heading is collapsed.
classH2
An H2 heading with the text specified.
Attributes:
text (str): The text of the heading.
collapsed_blocks (Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.
classH3
An H3 heading with the text specified.
Attributes:
text (str): The text of the heading.
collapsed_blocks (Optional[LList[“BlockTypes”]]): One or more blocks to show when the heading is collapsed.
classHeading
classHorizontalRule
HTML horizontal line.
classImage
A block that renders an image.
Attributes:
url (str): The URL of the image.
caption (str): The caption of the image. Caption appears underneath the image.
classInlineCode
Inline code. Does not add newline character after code.
Attributes:
text (str): The code you want to appear in the report.
classInlineLatex
Inline LaTeX markdown. Does not add newline character after the LaTeX markdown.
Attributes:
text (str): LaTeX markdown you want to appear in the report.
classLatexBlock
A block of LaTeX text.
Attributes:
text (str): The LaTeX text.
classLayout
The layout of a panel in a report. Adjusts the size and position of the panel.
Attributes:
x (int): The x position of the panel.
y (int): The y position of the panel.
w (int): The width of the panel.
h (int): The height of the panel.
classLinePlot
A panel object with 2D line plots.
Attributes:
title (Optional[str]): The text that appears at the top of the plot.
x (Optional[MetricType]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
y (LList[MetricType]): One or more metrics logged to your W&B project that the report pulls information from. The metric specified is used for the y-axis.
range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
range_y (Tuple[float | None, float | None]): Tuple that specifies the range of the y-axis.
log_x (Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
log_y (Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
title_x (Optional[str]): The label of the x-axis.
title_y (Optional[str]): The label of the y-axis.
ignore_outliers (Optional[bool]): If set to True, do not plot outliers.
groupby (Optional[str]): Group runs based on a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with specified function. Options include mean, min, max, median, sum, samples, or None.
groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
smoothing_factor (Optional[float]): The smoothing factor to apply to the smoothing type. Accepted values range between 0 and 1.
smoothing_type (Optional[SmoothingType]): Apply a filter based on the specified distribution. Options include exponentialTimeWeighted, exponential, gaussian, average, or none.
smoothing_show_original (Optional[bool]): If set to True, show the original data.
max_runs_to_show (Optional[int]): The maximum number of runs to show on the line plot.
custom_expressions (Optional[LList[str]]): Custom expressions to apply to the data.
plot_type (Optional[LinePlotStyle]): The type of line plot to generate. Options include line, stacked-area, or pct-area.
font_size (Optional[FontSize]): The size of the line plot's font. Options include small, medium, large, auto, or None.
legend_position (Optional[LegendPosition]): Where to place the legend. Options include north, south, east, west, or None.
legend_template (Optional[str]): The template for the legend.
aggregate (Optional[bool]): If set to True, aggregate the data.
xaxis_expression (Optional[str]): The expression for the x-axis.
legend_fields (Optional[LList[str]]): The fields to include in the legend.
classLink
A link to a URL.
Attributes:
text (Union[str, TextWithInlineComments]): The text of the link.
url (str): The URL the link points to.
classMarkdownBlock
A block of markdown text. Useful if you want to write text that uses common markdown syntax.
Attributes:
text (str): The markdown text.
classMarkdownPanel
A panel that renders markdown.
Attributes:
markdown (str): The text you want to appear in the markdown panel.
classMediaBrowser
A panel that displays media files in a grid layout.
Attributes:
num_columns (Optional[int]): The number of columns in the grid.
media_keys (LList[str]): A list of media keys that correspond to the media files.
classMetric
A metric to display in a report that is logged in your project.
Attributes:
name (str): The name of the metric.
classOrderBy
A metric to order by.
Attributes:
name (str): The name of the metric.
ascending (bool): Whether to sort in ascending order. By default set to False.
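What an OrderBy expresses can be illustrated with plain Python sorting. The run records below are hypothetical dictionaries, not wandb objects:

```python
# Illustrative sketch: sort runs by a named metric, descending by
# default (ascending=False), matching OrderBy's default.
runs = [
    {"name": "run-a", "val_loss": 0.31},
    {"name": "run-b", "val_loss": 0.12},
    {"name": "run-c", "val_loss": 0.25},
]

def order_by(runs, name, ascending=False):
    return sorted(runs, key=lambda r: r[name], reverse=not ascending)

print([r["name"] for r in order_by(runs, "val_loss")])
# ['run-a', 'run-c', 'run-b']
```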
classOrderedList
A list of items in a numbered list.
Attributes:
items (LList[str]): A list of one or more OrderedListItem objects.
classOrderedListItem
A list item in an ordered list.
Attributes:
text (str): The text of the list item.
classP
A paragraph of text.
Attributes:
text (str): The text of the paragraph.
classPanel
A panel that displays a visualization in a panel grid.
Attributes:
layout (Layout): A Layout object.
classPanelGrid
A grid that consists of runsets and panels. Add runsets and panels with Runset and Panel objects, respectively.
runsets (LList[“Runset”]): A list of one or more Runset objects.
panels (LList[“PanelTypes”]): A list of one or more Panel objects.
active_runset (int): The number of runs you want to display within a runset. By default, it is set to 0.
custom_run_colors (dict): Key-value pairs where the key is the name of a run and the value is a color specified by a hexadecimal value.
classParallelCoordinatesPlot
A panel object that shows a parallel coordinates plot.
Attributes:
columns (LList[ParallelCoordinatesPlotColumn]): A list of one or more ParallelCoordinatesPlotColumn objects.
title (Optional[str]): The text that appears at the top of the plot.
gradient (Optional[LList[GradientPoint]]): A list of gradient points.
font_size (Optional[FontSize]): The size of the line plot’s font. Options include small, medium, large, auto, or None.
classParallelCoordinatesPlotColumn
A column within a parallel coordinates plot. The order of metrics specified determine the order of the parallel axis (x-axis) in the parallel coordinates plot.
Attributes:
metric (str | Config | SummaryMetric): The name of the metric logged to your W&B project that the report pulls information from.
display_name (Optional[str]): The name of the metric
inverted (Optional[bool]): Whether to invert the metric.
log (Optional[bool]): Whether to apply a log transformation to the metric.
classParameterImportancePlot
A panel that shows how important each hyperparameter is in predicting the chosen metric.
Attributes:
with_respect_to (str): The metric you want to compare the parameter importance against. Common metrics might include the loss, accuracy, and so forth. The metric you specify must be logged within the project that the report pulls information from.
classReport
An object that represents a W&B Report. Use the returned object's blocks attribute to customize your report. Report objects do not automatically save. Use the save() method to persist changes.
Attributes:
project (str): The name of the W&B project you want to load in. The project specified appears in the report’s URL.
entity (str): The W&B entity that owns the report. The entity appears in the report’s URL.
title (str): The title of the report. The title appears at the top of the report as an H1 heading.
description (str): A description of the report. The description appears underneath the report’s title.
blocks (LList[BlockTypes]): A list of one or more HTML tags, plots, grids, runsets, and more.
width (Literal[‘readable’, ‘fixed’, ‘fluid’]): The width of the report. Options include ‘readable’, ‘fixed’, ‘fluid’.
property url
The URL where the report is hosted. The report URL consists of https://wandb.ai/{entity}/{project_name}/reports/, where {entity} and {project_name} are the entity that the report belongs to and the name of the project, respectively.
classmethodfrom_url
from_url(url: str, as_model: bool =False)
Load a report into the current environment. Pass in the URL where the report is hosted.
Arguments:
url (str): The URL where the report is hosted.
as_model (bool): If True, return the model object instead of the Report object. By default, set to False.
methodsave
save(draft: bool =False, clone: bool =False)
Persists changes made to a report object.
methodto_html
to_html(height: int =1024, hidden: bool =False) → str
Generate HTML containing an iframe displaying this report. Commonly used within a Python notebook.
Arguments:
height (int): Height of the iframe.
hidden (bool): If True, hide the iframe. Default set to False.
classRunComparer
A panel that compares metrics across different runs from the project the report pulls information from.
Attributes:
diff_only(Optional[Literal["split", True]]): Display only the difference across runs in a project. You can toggle this feature on and off in the W&B Report UI.
classRunset
A set of runs to display in a panel grid.
Attributes:
entity (str): An entity that owns or has the correct permissions to the project where the runs are stored.
project (str): The name of the project where the runs are stored.
name (str): The name of the run set. Set to Run set by default.
query (str): A query string to filter runs.
filters (Optional[str]): A filter string to filter runs.
groupby (LList[str]): A list of metric names to group by.
order (LList[OrderBy]): A list of OrderBy objects to order by.
custom_run_colors (LList[OrderBy]): A dictionary mapping run IDs to colors.
classRunsetGroup
UI element that shows a group of runsets.
Attributes:
runset_name (str): The name of the runset.
keys (Tuple[RunsetGroupKey, …]): The keys to group by. Pass in one or more RunsetGroupKey objects to group by.
classRunsetGroupKey
Groups runsets by a metric type and value. Part of a RunsetGroup. Specify the metric type and value to group by as key-value pairs.
Attributes:
key (Type[str] | Type[Config] | Type[SummaryMetric] | Type[Metric]): The metric type to group by.
value (str): The value of the metric to group by.
classScalarChart
A panel object that shows a scalar chart.
Attributes:
title (Optional[str]): The text that appears at the top of the plot.
metric (MetricType): The name of a metric logged to your W&B project that the report pulls information from.
groupby_aggfunc (Optional[GroupAgg]): Aggregate runs with specified function. Options include mean, min, max, median, sum, samples, or None.
groupby_rangefunc (Optional[GroupArea]): Group runs based on a range. Options include minmax, stddev, stderr, none, samples, or None.
custom_expressions (Optional[LList[str]]): A list of custom expressions to be used in the scalar chart.
legend_template (Optional[str]): The template for the legend.
font_size (Optional[FontSize]): The size of the line plot's font. Options include small, medium, large, auto, or None.
classScatterPlot
A panel object that shows a 2D or 3D scatter plot.
Arguments:
title (Optional[str]): The text that appears at the top of the plot.
x (Optional[SummaryOrConfigOnlyMetric]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the x-axis.
y (Optional[SummaryOrConfigOnlyMetric]): One or more metrics logged to your W&B project that the report pulls information from. Metrics specified are plotted on the y-axis.
z (Optional[SummaryOrConfigOnlyMetric]): The name of a metric logged to your W&B project that the report pulls information from. The metric specified is used for the z-axis.
range_x (Tuple[float | None, float | None]): Tuple that specifies the range of the x-axis.
range_y (Tuple[float | None, float | None]): Tuple that specifies the range of the y-axis.
range_z (Tuple[float | None, float | None]): Tuple that specifies the range of the z-axis.
log_x (Optional[bool]): Plots the x-coordinates using a base-10 logarithmic scale.
log_y (Optional[bool]): Plots the y-coordinates using a base-10 logarithmic scale.
log_z (Optional[bool]): Plots the z-coordinates using a base-10 logarithmic scale.
running_ymin (Optional[bool]): Apply a moving average or rolling mean.
running_ymax (Optional[bool]): Apply a moving average or rolling mean.
running_ymean (Optional[bool]): Apply a moving average or rolling mean.
legend_template (Optional[str]): A string that specifies the format of the legend.
gradient (Optional[LList[GradientPoint]]): A list of gradient points that specify the color gradient of the plot.
font_size (Optional[FontSize]): The size of the line plot’s font. Options include small, medium, large, auto, or None.
regression (Optional[bool]): If True, a regression line is plotted on the scatter plot.
classSoundCloud
A block that renders a SoundCloud player.
Attributes:
html (str): The HTML code to embed the SoundCloud player.
classSpotify
A block that renders a Spotify player.
Attributes:
spotify_id (str): The Spotify ID of the track or playlist.
classSummaryMetric
A summary metric to display in a report.
Attributes:
name (str): The name of the metric.
classTableOfContents
A block that contains a list of sections and subsections using H1, H2, and H3 HTML blocks specified in a report.
classTextWithInlineComments
A block of text with inline comments.
Attributes:
text (str): The text of the block.
classTwitter
A block that displays a Twitter feed.
Attributes:
html (str): The HTML code to display the Twitter feed.
classUnorderedList
A list of items in a bulleted list.
Attributes:
items (LList[str]): A list of one or more UnorderedListItem objects.
classUnorderedListItem
A list item in an unordered list.
Attributes:
text (str): The text of the list item.
classVideo
A block that renders a video.
Attributes:
url (str): The URL of the video.
classWeaveBlockArtifact
A block that shows an artifact logged to W&B. The query takes the form of
Python library for programmatically working with the W&B Workspace API.
# How to import
import wandb_workspaces.workspaces as ws
import wandb_workspaces.reports.v2 as wr

# Example of creating a workspace
workspace = ws.Workspace(
    name="Example W&B Workspace",
    entity="entity",  # entity that owns the workspace
    project="project",  # project that the workspace is associated with
    sections=[
        ws.Section(
            name="Validation Metrics",
            panels=[
                wr.LinePlot(x="Step", y=["val_loss"]),
                wr.BarPlot(metrics=["val_accuracy"]),
                wr.ScalarChart(metric="f1_score", groupby_aggfunc="mean"),
            ],
            is_open=True,
        ),
    ],
)
workspace.save()
classRunSettings
Settings for a run in a runset (left hand bar).
Attributes:
color (str): The color of the run in the UI. Can be hex (#ff0000), css color (red), or rgb (rgb(255, 0, 0))
disabled (bool): Whether the run is deactivated (eye closed in the UI). Default is set to False.
classRunsetSettings
Settings for the runset (the left bar containing runs) in a workspace.
Attributes:
query (str): A query to filter the runset (can be a regex expr, see next param).
regex_query (bool): Controls whether the query (above) is a regex expr. Default is set to False.
filters(LList[expr.FilterExpr]): A list of filters to apply to the runset. Filters are AND’d together. See FilterExpr for more information on creating filters.
groupby(LList[expr.MetricType]): A list of metrics to group by in the runset. Set to Metric, Summary, Config, Tags, or KeysInfo.
order(LList[expr.Ordering]): A list of metrics and ordering to apply to the runset.
run_settings(Dict[str, RunSettings]): A dictionary of run settings, where the key is the run’s ID and the value is a RunSettings object.
classSection
Represents a section in a workspace.
Attributes:
name (str): The name/title of the section.
panels(LList[PanelTypes]): An ordered list of panels in the section. By default, first is top-left and last is bottom-right.
is_open (bool): Whether the section is open or closed. Default is closed.
layout_settings(Literal[standard, custom]): Settings for panel layout in the section.
panel_settings: Panel-level settings applied to all panels in the section, similar to WorkspaceSettings for a Section.
classSectionLayoutSettings
Panel layout settings for a section, typically seen at the top right of the section of the W&B App Workspace UI.
Attributes:
layout(Literal[standard, custom]): The layout of panels in the section. standard follows the default grid layout, custom allows per-panel layouts controlled by the individual panel settings.
columns (int): In a standard layout, the number of columns in the layout. Default is 3.
rows (int): In a standard layout, the number of rows in the layout. Default is 2.
classSectionPanelSettings
Panel settings for a section, similar to WorkspaceSettings for a section.
Settings applied here can be overridden by more granular Panel settings in this priority: Section < Panel.
Attributes:
x_axis (str): X-axis metric name setting. By default, set to Step.
x_min (Optional[float]): Minimum value for the x-axis.
x_max (Optional[float]): Maximum value for the x-axis.
smoothing_type (Literal[’exponentialTimeWeighted’, ’exponential’, ‘gaussian’, ‘average’, ’none’]): Smoothing type applied to all panels.
smoothing_weight (int): Smoothing weight applied to all panels.
classWorkspace
Represents a W&B workspace, including sections, settings, and config for run sets.
Attributes:
entity (str): The entity this workspace will be saved to (usually user or team name).
project (str): The project this workspace will be saved to.
name: The name of the workspace.
sections(LList[Section]): An ordered list of sections in the workspace. The first section is at the top of the workspace.
settings(WorkspaceSettings): Settings for the workspace, typically seen at the top of the workspace in the UI.
runset_settings(RunsetSettings): Settings for the runset (the left bar containing runs) in a workspace.
property url
The URL to the workspace in the W&B app.
classmethodfrom_url
from_url(url: str)
Get a workspace from a URL.
methodsave
save()
Save the current workspace to W&B.
Returns:
Workspace: The updated workspace with the saved internal name and ID.
methodsave_as_new_view
save_as_new_view()
Save the current workspace as a new view to W&B.
Returns:
Workspace: The updated workspace with the saved internal name and ID.
classWorkspaceSettings
Settings for the workspace, typically seen at the top of the workspace in the UI.
This object includes settings for the x-axis, smoothing, outliers, panels, tooltips, runs, and panel query bar.
Settings applied here can be overridden by more granular Section and Panel settings in this priority: Workspace < Section < Panel
Attributes:
x_axis (str): X-axis metric name setting.
x_min(Optional[float]): Minimum value for the x-axis.
x_max(Optional[float]): Maximum value for the x-axis.
smoothing_type(Literal['exponentialTimeWeighted', 'exponential', 'gaussian', 'average', 'none']): Smoothing type applied to all panels.
smoothing_weight (int): Smoothing weight applied to all panels.
ignore_outliers (bool): Ignore outliers in all panels.
sort_panels_alphabetically (bool): Sorts panels in all sections alphabetically.
group_by_prefix(Literal[first, last]): Group panels by the first prefix or up to the last prefix (first or last). Default is set to last.
remove_legends_from_panels (bool): Remove legends from all panels.
tooltip_number_of_runs(Literal[default, all, none]): The number of runs to show in the tooltip.
tooltip_color_run_names (bool): Whether to color run names in the tooltip to match the runset (True) or not (False). Default is set to True.
max_runs (int): The maximum number of runs to show per panel (this will be the first 10 runs in the runset).
point_visualization_method(Literal[line, point, line_point]): The visualization method for points.
panel_search_query (str): The query for the panel search bar (can be a regex expression).
auto_expand_panel_search_results (bool): Whether to auto expand the panel search results.
5 - Query Expression Language
Use query expressions to select and aggregate data across runs and projects.
Learn more about query panels.
Converts a number to a timestamp. Values less than 31536000000 will be converted to seconds, values less than 31536000000000 will be converted to milliseconds, values less than 31536000000000000 will be converted to microseconds, and values less than 31536000000000000000 will be converted to nanoseconds.
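The documented magnitude thresholds can be sketched as a small helper that reports which unit a number would be interpreted as before conversion. This is an illustration of the rule, not part of the query language itself:

```python
# Sketch of the documented magnitude thresholds for timestamp
# conversion. The fallback assumes anything at or above the
# microsecond threshold is nanoseconds.
def timestamp_unit(value: float) -> str:
    if value < 31_536_000_000:  # ~1000 years expressed in seconds
        return "seconds"
    if value < 31_536_000_000_000:
        return "milliseconds"
    if value < 31_536_000_000_000_000:
        return "microseconds"
    return "nanoseconds"

print(timestamp_unit(1_700_000_000))      # seconds
print(timestamp_unit(1_700_000_000_000))  # milliseconds
```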