{"id":88595,"date":"2024-10-02T17:47:49","date_gmt":"2024-10-02T15:47:49","guid":{"rendered":"https:\/\/intercoaching.fr\/?p=88595"},"modified":"2024-10-02T19:31:39","modified_gmt":"2024-10-02T17:31:39","slug":"exploring-the-risks-of-ai-based-on-large-language-models-diving-into-the-dark-side-of-linguistic-computation","status":"publish","type":"post","link":"https:\/\/intercoaching.fr\/en\/exploring-the-risks-of-ai-based-on-large-language-models-diving-into-the-dark-side-of-linguistic-computation\/","title":{"rendered":"Exploring the risks of AI based on large language models: diving into the dark side of linguistic computation"},"content":{"rendered":"<h2 class=\"wp-block-heading\">Examining the perils associated with large language models<\/h2>\n\n\n<p>Language models developed for artificial intelligence always have vulnerabilities when it comes to their use in malicious contexts.<\/p>\n\n\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<div class=\"wp-block-embed__wrapper\">\n<iframe title=\"LES 5 RISQUES LIES AU CONTROLE INTERNE\" width=\"1200\" height=\"675\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/7HVdCMB73h0?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div>\n<\/figure>\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter\"><div class=\"wp-block-embed__wrapper\">\nhttps:\/\/twitter.com\/pyoudeyer\/status\/1727719742855634980\n<\/div><\/figure>\n\n\n<h3 class=\"wp-block-heading\">Vulnerability to malicious exploitation<\/h3>\n\n\n<p>The progress made in the field of artificial intelligence leaves no one indifferent: it represents progress for some and a source of concern for others. 
Large Language Models (LLMs), like any technology, can be misused. Technological advances have made it easier to mount more targeted and sophisticated phishing attacks, as Julian Hazell\u2019s cybersecurity research has demonstrated: fraudulent content generated with tools like ChatGPT can be dangerously convincing.<\/p>\n\n\n<p>MIT experts have also highlighted how easily these models could contribute to the creation of harmful biological agents. LLMs can likewise absorb confidential data into their knowledge bases by mistake, and that data can later be exposed when targeted queries are put to virtual assistants.<\/p>\n\n\n<h3 class=\"wp-block-heading\">Increasing risks of misuse<\/h3>\n\n\n<p>Since the launch of ChatGPT, the use of LLMs has expanded, and with it their misuse by malicious actors. Examples such as FraudGPT and WormGPT, models specialized in fraud, illustrate this worrying trend. The companies behind the underlying models, including OpenAI, have yet to develop measures that reliably prevent their use for nefarious purposes. 
Even systems that are supposed to be secure can be bypassed relatively easily and inexpensively.<\/p>\n\n\n<h3 class=\"wp-block-heading\">Solutions to counter the phenomenon<\/h3>\n\n\n<ul class=\"wp-block-list\">\n\n<li>Ericom offers solutions that isolate sensitive data and protect it from exposure to potentially harmful AI.<\/li>\n\n\n<li>Menlo Security focuses on securing browsers to prevent malware exposure and data loss.<\/li>\n\n<\/ul>\n\n\n<p>Despite the efforts of industry leaders such as Google to mitigate these vulnerabilities, the lack of consensus within OpenAI and the rapid evolution of GPT models make the balance between innovation and security particularly difficult to strike and maintain.<\/p>\n\n\n<p>In summary, although artificial intelligence opens a promising technological horizon, its recent developments confront us with a complex and potentially dangerous reality that demands increased vigilance and stronger security measures.<\/p>","protected":false},"excerpt":{"rendered":"","protected":false},"author":3,"featured_media":84452,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_seopress_robots_primary_cat":"","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","_seopress_analysis_target_kw":"","_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","_glsr_average":0,"_glsr_ranking":0,"_glsr_reviews":0,"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[2249],"tags":[2311,4028,4035,4032],"class_list":["post-88595","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-news-en","tag-artificial-intelligence-en","tag-dark-side-of-linguistic-computation-en","tag-major-language-models-en","tag-risk-exploration-en","infinite-scroll-item","masonry-post","generate-columns","tablet-grid-50","mobile-grid-100","grid-parent","grid-33"],"acf":[],"jetpack_featured_media_url":"https:\/\/intercoaching.fr\/wp-content\/uploads\/2024\/01\/Exploration-des-risques-lies-aux-IA-basees-sur-les-grands-modeles-de-langage-plongee-dans-le-cote-obscur-de-la-computation-linguistique.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/posts\/88595","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/comments?post=88595"}],"version-history":[{"count":1,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/pos
ts\/88595\/revisions"}],"predecessor-version":[{"id":88596,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/posts\/88595\/revisions\/88596"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/media\/84452"}],"wp:attachment":[{"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/media?parent=88595"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/categories?post=88595"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/intercoaching.fr\/en\/wp-json\/wp\/v2\/tags?post=88595"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}