In-Memory Versus Disk-Based Computing with Random Forest for Stock Analysis: A Comparative Study

Chitra Joshi, Chitrakant Banchorr, Omkaresh Kulkarni, Kirti Wanjale


Issue: 3/2025
Journal: Acta Informatica Pragensia
DOI: 10.18267/j.aip.275

Keywords: Apache Spark; MapReduce; Big data; Random forest; Performance comparison; Data processing; In-memory processing; Disk-based processing


Abstract: Background: The advancement of big data analytics calls for careful selection of processing frameworks to optimize machine learning effectiveness. Choosing the appropriate framework can significantly influence the speed and accuracy of data analysis, ultimately leading to more informed decision making. In adapting to this changing landscape, businesses should focus on factors such as how well a system scales, how easily it can be used and how effectively it integrates with their existing tools. The effectiveness of these frameworks plays a crucial role in determining data processing speed, model training efficiency and predictive accuracy. As data becomes increasingly large, diverse and fast-moving, conventional processing systems often fall short of the performance required for modern analytics.

Objective: This research seeks to thoroughly assess the performance of two prominent big data processing frameworks—Apache Spark (in-memory computing) and MapReduce (disk-based computing)—with a focus on applying random forest algorithms to predict stock prices. The primary objective is to assess and compare their effectiveness in handling large-scale financial datasets, focusing on key aspects such as predictive accuracy, processing speed and scalability.
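To make the contrast between the two paradigms concrete, the following is a minimal pure-Python sketch of the MapReduce programming model (map, shuffle, reduce) applied to toy stock records. The tickers and prices are illustrative only, not the study's dataset; in a real MapReduce job, the intermediate output of each phase is written to disk, whereas Spark keeps it in memory, which is the crux of the comparison.

```python
from collections import defaultdict

# Toy records: (ticker, closing_price) pairs -- hypothetical data
records = [("AAPL", 101.0), ("MSFT", 250.0), ("AAPL", 103.0), ("MSFT", 252.0)]

# Map phase: emit (key, value) pairs from each input record
mapped = [(ticker, price) for ticker, price in records]

# Shuffle phase: group emitted values by key
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: aggregate each group (here, the mean closing price)
averages = {key: sum(vals) / len(vals) for key, vals in groups.items()}
print(averages)  # {'AAPL': 102.0, 'MSFT': 251.0}
```

Iterative algorithms such as random forest training repeat passes like this many times, so the cost of materializing each phase to disk (MapReduce) versus caching it in memory (Spark) compounds with every iteration.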

Methods: The investigation uses the MapReduce methodology and Apache Spark independently to analyse a substantial stock price dataset and to train a random forest regressor. Mean squared error (MSE) and root mean square error (RMSE) were employed to assess the primary performance indicators of the models, while mean absolute error (MAE) and the R-squared value were used to evaluate the goodness of fit of the models.
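The four evaluation metrics named above have standard definitions, sketched below in plain Python for clarity. The example prices are hypothetical and not drawn from the study's dataset.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute the study's four indicators: MSE, RMSE, MAE and R-squared."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n          # mean squared error
    rmse = math.sqrt(mse)                         # root mean square error
    mae = sum(abs(e) for e in errors) / n         # mean absolute error
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot                   # coefficient of determination
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Hypothetical actual vs. predicted closing prices
actual = [101.0, 102.5, 99.8, 103.2]
predicted = [100.5, 103.0, 100.1, 102.8]
print(regression_metrics(actual, predicted))
```

Lower MSE, RMSE and MAE indicate smaller prediction errors, while an R-squared value closer to 1 indicates a better fit, which is how the two implementations are compared in the Results section.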

Results: The RMSE, MAE and MSE obtained for the Spark-based implementation were lower than those of the MapReduce-based implementation; these lower values indicate higher prediction accuracy. Spark's optimized in-memory processing also substantially reduced model training and execution times. In contrast, the MapReduce approach exhibited higher latency and lower accuracy, reflecting its disk-based constraints and reduced efficiency for iterative machine learning tasks.

Conclusion: The findings support Spark as the better option for complex machine learning tasks such as stock price prediction, given its strength in handling large volumes of data. MapReduce remains a reliable framework, but it lacks the processing speed and lightweight execution needed for rapid, iterative analytics. The outcomes of this study help data scientists and financial analysts choose the most appropriate framework for big data machine learning applications.