AdoptOpenJDK provides prebuilt binaries from OpenJDK class libraries. All AdoptOpenJDK binaries and scripts are open-source licensed and available for free. The version of AdoptOpenJDK distributed with the IDV is 1.8.0_422, which introduces several updates and enhancements, particularly targeting security, compatibility, and performance. See the AdoptOpenJDK Library for more detailed information.
The version of the netCDF-Java library currently distributed with the IDV is 5.5.3. See the netCDF-Java Library for more detailed information.
The IDV provides a suite of tools for transforming raw datasets, making them suitable for applications like machine learning. Users can apply various scalers (Standard, Robust, Min-Max), transformers (Quantile, Power), and normalizers to improve data quality and align distributions with model requirements. The platform also includes statistical tools for calculating averages, percentiles, and distribution transformations, helping users visualize and refine data as needed. Results can be exported in formats like CSV or netCDF, making it easy to integrate IDV-processed data into workflows for machine learning, scientific modeling, and more. This flexibility allows users to efficiently prepare data for diverse analytical needs.
The newly developed Level 2 radar grid display feature has been expanded with derived formulas for calculating radar precipitation rates. These calculations are based on two key approaches: the Marshall-Palmer drop size distribution and dual polarization radar data. The Marshall-Palmer method provides a traditional estimate of precipitation rates using reflectivity, while the dual polarization approach enhances accuracy by factoring in both reflectivity and differential reflectivity. These advancements allow for more precise and varied precipitation rate calculations, improving radar data interpretation and weather analysis.
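To illustrate the Marshall-Palmer approach, here is a minimal sketch (not the IDV's own formula code) that inverts the classic Z-R relationship Z = a * R^b to estimate rain rate from reflectivity. The coefficients a = 200 and b = 1.6 are the standard Marshall-Palmer values; the function name is illustrative:

```python
def marshall_palmer_rate(dbz, a=200.0, b=1.6):
    """Estimate rain rate R (mm/hr) from reflectivity (dBZ) using the
    Marshall-Palmer Z-R relationship Z = a * R**b with a=200, b=1.6."""
    z_linear = 10.0 ** (dbz / 10.0)     # convert dBZ to linear reflectivity Z
    return (z_linear / a) ** (1.0 / b)  # invert Z = a * R**b for R

# Light rain: 20 dBZ corresponds to roughly 0.65 mm/hr
rate = marshall_palmer_rate(20.0)
```

The dual-polarization estimators mentioned above additionally use differential reflectivity (ZDR); their coefficients vary by radar and study, so they are not sketched here.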
After creating the first vertical cross-section display, you can add a contour cross-section display for a second variable, as well as a wind vector cross-section display for a derived variable. When working with multi-variable cross-section displays, we recommend using a color-filled contour display or a color-shaded display for the first variable, and contour displays for the second and third variables. Multi-variable cross-section displays offer several advantages in data visualization and analysis, such as enhanced data comparison, comprehensive analysis, and efficient use of screen space. Additionally, you can now switch the vertical coordinate scale from meters to pressure in hPa, providing greater flexibility in interpreting the data.
A time-height display shows samples of a 3D parameter along a vertical profile from top to bottom of the available data, with time as the independent coordinate (x-axis). You can choose between contour, color-filled, and color-shaded time-height displays. After creating the time-height display for the first variable, you can add a contour time-height display for a second variable. This setup allows for a more detailed and layered analysis of vertical atmospheric data over time. Additionally, you can now switch the vertical coordinate scale from meters to pressure in hPa, providing greater flexibility in interpreting the data.
We have updated the algorithm responsible for calculating the clip distance during zoom operations in both map and globe views. This enhancement ensures that users can now zoom in to street level without experiencing the disappearance of 3D objects, improving the overall user experience. The refined algorithm dynamically adjusts the clip distance based on zoom levels, allowing for seamless and detailed visualization at various scales. This update is particularly beneficial for users requiring precise close-up views in 3D environments, making the zoom functionality more reliable and effective.
Scalers are applied as linear transformers to standardize or normalize dataset distributions. The available scalers (Standard Scaler, Robust Scaler, and Min-Max Scaler) offer tailored solutions for different data types and characteristics. The Standard Scaler standardizes features by removing the mean and scaling to unit variance, which is effective for data that generally follows a normal distribution. The Robust Scaler, on the other hand, is ideal for datasets with outliers, as it scales based on the median and interquartile range, reducing the influence of extreme values. The Min-Max Scaler scales features to a specified range, typically [0, 1], which is helpful for algorithms that require normalized data within bounded values. These scaling options provide flexibility for preparing datasets, ensuring that feature distributions align with model assumptions, and can improve algorithm performance.
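The three scalers can be sketched in plain Python as follows. This is an illustrative sketch of the underlying arithmetic, not the IDV's implementation; function names are our own:

```python
def standard_scale(values):
    """Standard scaling: subtract the mean, divide by the standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def robust_scale(values):
    """Robust scaling: subtract the median, divide by the interquartile range."""
    s = sorted(values)
    def percentile(p):
        # linear interpolation between the closest ranks
        k = (len(s) - 1) * p
        f = int(k)
        c = min(f + 1, len(s) - 1)
        return s[f] + (s[c] - s[f]) * (k - f)
    median = percentile(0.5)
    iqr = percentile(0.75) - percentile(0.25)
    return [(v - median) / iqr for v in values]

def min_max_scale(values, lo=0.0, hi=1.0):
    """Min-max scaling: map values linearly onto the range [lo, hi]."""
    vmin, vmax = min(values), max(values)
    return [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
```

Note how an outlier such as 100 in [1, 2, 3, 4, 100] dominates the mean used by standard scaling but barely shifts the median and interquartile range used by robust scaling.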
Transformers apply non-linear transformations that map a dataset into a representation more suitable for downstream applications. The available transformers (Quantile Transformer, Power Transformer, and Normalizer) offer tailored solutions for different data types and characteristics. The Quantile Transformer applies a non-linear transformation in which distances between marginal outliers and inliers are shrunk. Power transforms are a family of parametric transformations that aim to map data from any distribution to as close to a Gaussian distribution as possible. These transformers add versatility to data preprocessing workflows, helping to align feature distributions with model assumptions and improving algorithmic performance in areas such as machine learning.
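Two of these ideas can be sketched briefly. A rank-based quantile transform maps each value to its empirical quantile in [0, 1], which is how marginal outliers get pulled toward the rest of the distribution; a normalizer rescales each sample vector to unit norm. Both sketches are illustrative (ties are not handled, and the actual transformers interpolate against a reference distribution):

```python
def quantile_transform(values):
    """Rank-based sketch of a quantile transform: map each value to its
    empirical quantile in [0, 1]. Outliers land at the ends of the same
    [0, 1] range as everything else, so their distance is shrunk."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    quantiles = [0.0] * n
    for rank, i in enumerate(order):
        quantiles[i] = rank / (n - 1)
    return quantiles

def l2_normalize(row):
    """Normalizer sketch: scale one sample vector to unit Euclidean norm."""
    norm = sum(v * v for v in row) ** 0.5
    return [v / norm for v in row]
```

For example, the extreme value 1000 in [1, 2, 3, 1000] is mapped to 1.0, only one quantile step beyond its neighbor.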
Data classification helps in organizing data into categories based on sensitivity, importance, or other criteria. In machine learning, classification involves assigning a class label to a given input data example. The Jython Classifier formula provides a simple method to label 2D gridded data values with numbers. This basic classification algorithm is used to sort data points into different classes, allowing machine learning applications to train on existing data and predict the classification of new data points.
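The kind of threshold-based labeling such a formula performs can be sketched as follows. This is an illustrative sketch, not the IDV's Jython Classifier formula itself; the function name and signature are our own:

```python
def classify(grid, thresholds):
    """Label each value of a 2D grid with an integer class index using
    ascending thresholds: values below thresholds[0] get class 0, values
    between thresholds[0] and thresholds[1] get class 1, and so on."""
    def label(v):
        for k, t in enumerate(thresholds):
            if v < t:
                return k
        return len(thresholds)
    return [[label(v) for v in row] for row in grid]

grid = [[0.2, 1.5], [3.7, 9.9]]
labels = classify(grid, [1.0, 3.0, 8.0])  # [[0, 1], [2, 3]]
```

The labeled grid can then serve as training targets for a machine learning model, or simply as a categorized view of the field.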
A median filter is a non-linear digital filtering technique widely used in image processing to reduce noise. It works by replacing the center pixel in a neighborhood with the median value of that surrounding window. This technique is particularly effective at removing Gaussian, random, and salt-and-pepper noise from images. Due to its effectiveness, the median filter is commonly used in the data preprocessing stage of machine learning applications.
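The mechanism can be sketched in a few lines of plain Python (an illustrative implementation, not the IDV's; near the edges this version uses only the pixels that fall inside the image):

```python
def median_filter(image, size=3):
    """Apply a size x size median filter to a 2D list of pixel values,
    replacing each pixel with the median of its neighborhood."""
    h, w = len(image), len(image[0])
    r = size // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [image[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out
```

A single salt-and-pepper spike, e.g. a 99 surrounded by 1s, is replaced by the neighborhood median of 1, which is why the filter removes impulse noise so effectively.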
Applying spatial functions like Max, Min, Average, and Percentile over a 2D grid allows you to generate a new time series point field. By applying these functions to the 2D grid at each time step, you transform spatial data into a time series that reflects specific statistical properties, offering new insights into the data's temporal dynamics and supporting data preprocessing for machine learning applications.
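The idea of collapsing each 2D grid in a time sequence to a single statistic can be sketched as follows (an illustrative sketch with invented function names, not the IDV's formula code):

```python
def percentile(values, p):
    """p-th percentile (0-100) of a list, with linear interpolation."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

def grid_time_series(grids, stat):
    """Collapse each 2D grid in a time sequence to one value with the
    given spatial statistic, producing a time series."""
    return [stat([v for row in grid for v in row]) for grid in grids]

# Two time steps of a 2x2 grid
grids = [[[1.0, 2.0], [3.0, 4.0]],
         [[2.0, 4.0], [6.0, 8.0]]]
maxima = grid_time_series(grids, max)                        # [4.0, 8.0]
means = grid_time_series(grids, lambda v: sum(v) / len(v))   # [2.5, 5.0]
```

The resulting one-dimensional series (maxima, means, chosen percentiles) is exactly the kind of compact feature a downstream model can consume.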
For a list of outstanding known problems, see the Known Problems page.