Home Energy Consumption Management Using Multi-Agent Reinforcement Learning

Authors

School of Electrical and Computer Engineering, Shiraz University

Abstract

The growing consumption of electrical energy has long been one of the main challenges facing electricity suppliers. In response, demand response programs, which seek to manage energy consumption with objectives such as reducing costs and improving reliability, have attracted increasing attention. At the same time, the smartening of consumer equipment has made it increasingly possible to exploit artificial intelligence for energy management. This paper presents a method for home energy management aimed at minimizing both the consumer's electricity bill and their dissatisfaction. Household loads are divided into three categories: non-controllable, shiftable, and controllable. Multi-agent reinforcement learning with the Q-Learning algorithm is then used to make optimal decisions for each home appliance. Owing to the nature of the Q-Learning algorithm, and unlike integer programming approaches, the proposed method can accommodate additional household appliances and solve more complex problems. In the numerical study, applying the proposed method reduced the consumer's electricity bill by up to 8.24%. The results obtained from the proposed method also confirm its correct operation.
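As a concrete illustration of the approach described above, the following is a minimal sketch of per-appliance tabular Q-Learning agents whose reward trades off electricity cost against user dissatisfaction. The hourly tariff, appliance names and power ratings, comfort window, and all hyperparameters below are illustrative assumptions and are not taken from the paper; the paper's actual state, action, and reward design may differ.

```python
# Minimal sketch of per-appliance tabular Q-learning for home energy management.
# All prices, appliance parameters, and weights below are illustrative assumptions.

import random
from collections import defaultdict

random.seed(0)
HOURS = 24
# Assumed time-of-use tariff ($/kWh) for each hour of the day.
PRICES = [0.05] * 7 + [0.12] * 4 + [0.20] * 6 + [0.12] * 5 + [0.05] * 2


class ApplianceAgent:
    """One Q-learning agent per controllable or shiftable appliance."""

    def __init__(self, power_kw, actions=(0, 1), alpha=0.1, gamma=0.95, epsilon=0.1):
        self.power_kw = power_kw
        self.actions = actions            # 0 = off, 1 = on (discrete control levels)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)       # Q[(state, action)] -> estimated value

    def act(self, state):
        if random.random() < self.epsilon:                           # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])   # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])


def reward(hour, action, power_kw, comfort_weight=0.5):
    """Negative electricity cost plus a dissatisfaction penalty when the appliance
    stays off during an assumed evening comfort window."""
    cost = PRICES[hour] * power_kw * action
    dissatisfaction = comfort_weight if (action == 0 and 18 <= hour <= 22) else 0.0
    return -(cost + dissatisfaction)


# Train one agent per appliance independently (a common multi-agent simplification).
agents = {"washing_machine": ApplianceAgent(0.5), "water_heater": ApplianceAgent(2.0)}
for episode in range(2000):
    for name, agent in agents.items():
        for hour in range(HOURS):
            state, next_state = hour, (hour + 1) % HOURS
            action = agent.act(state)
            agent.update(state, action, reward(hour, action, agent.power_kw), next_state)

# Inspect the greedy on/off schedule learned for one appliance.
heater = agents["water_heater"]
schedule = [max(heater.actions, key=lambda a: heater.q[(h, a)]) for h in range(HOURS)]
print("water_heater on/off per hour:", schedule)
```

In this simplified sketch each agent learns independently on an hour-of-day state; a fuller multi-agent formulation would also include shared signals, such as the total household load or the non-controllable baseline consumption, in each agent's state.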

Keywords

