Keycloak: user can access one realm by logging in to another realm

Asked: 2021-04-21 16:34:44

Tags: keycloak openresty

I have an nginx/openresty client in front of a Keycloak server, used for authorization with OpenID Connect. I am using lua-resty-openidc to allow access to the services behind the proxy.

I have created two clients, in two different realms, for the different services.

The problem is that after a user has authenticated in the first realm, e.g. via https://<my-server>/auth/realms/<realm1>/protocol/openid-connect/auth?response_type=code&client_id=openresty&state=..........., he can also directly access the other service, which belongs to realm2.

What is going on here? How do I make sure that a user can only reach the clients of the realm he actually authenticated in?

And how do I make sure that, after logging out, he can no longer access anything until he logs in again?

[Edit - details] My nginx.conf for the two services is below. The user first visits https://<my-server>/service_1/ and is redirected to Keycloak to enter his realm1 password. He does so and can then access service_1.

However, if he then tries https://<my-server>/service_2/, he is not asked to authenticate again but is simply logged in, even though service_2 is a client of a different realm, with a different client_secret!

.....

location /service_1/ {

    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/service_1/auth", -- we are send here after auth
            discovery = "https://<my-server>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
            client_id = "openresty",
            client_secret = "<client1-secret>",
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- authenticate the user against Keycloak (OpenID Connect Authorization Code flow)
        local res, err = require("resty.openidc").authenticate(opts)

        if err then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }

    # I disabled caching so the browser won't cache the site.
    expires           0;
    add_header        Cache-Control private;

    proxy_pass http://<server-for-service1>:port1/foo/;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

}

.................................

location /service_2/ {

    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/service_2/auth", -- we are send here after auth
            discovery = "https://<my-server>/keycloak/auth/realms/realm2/.well-known/openid-configuration",
            client_id = "openresty",
            client_secret = "client2-secret",
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- authenticate the user against Keycloak (OpenID Connect Authorization Code flow)
        local res, err = require("resty.openidc").authenticate(opts)

        if err then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }

    # I disabled caching so the browser won't cache the site.
    expires           0;
    add_header        Cache-Control private;

    proxy_pass http://<server-for-service2>:port2/bar/;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

}

[Edit - details 2]

I am using lua-resty-openidc version 1.7.2, but judging from the diff between the two versions, everything I describe here should apply to 1.7.4 as well.

From the debug-level logs I can see clearly that the session is created on the first access and then reused for the second realm, which is wrong, because the second access still carries the token of the first realm... This is what the authorization for realm2 looks like:

2021/04/28 12:56:41 [debug] 2615#2615: *4617979 [lua] openidc.lua:1414: authenticate(): session.present=true, session.data.id_token=true, session.data.authenticated=true, opts.force_reauthorize=nil, opts.renew_access_token_on_expiry=nil, try_to_renew=true, token_expired=false
2021/04/28 12:56:41 [debug] 2615#2615: *4617979 [lua] openidc.lua:1470: authenticate(): id_token={"azp":"realm1","typ":"ID","iat":1619614598,"iss":"https:\/\/<myserver>\/keycloak\/auth\/realms\/realm1","aud":"realm1","nonce":"8c8ca2c4df2...b26"
,"jti":"1c028c65-...0994f","session_state":"0e1241e3-66fd-4ca1-a0dd-c0d1a6a5c708","email_verified":false,"sub":"25303e44-...e2c1757ae857","acr":"1","preferred_username":"logoutuser","auth_time":1619614598,"exp":1619614898,"at_hash":"5BNT...j414r72LU6g"}

1 answer:

Answer 0: (score: 0)

OK, this took me some time. It may well be that most tutorials leave this kind of hole (only in setups where a single nginx serves multiple realms): one realm will allow authenticated access to any other realm.

The typical authentication call from the tutorials looks like this:

location /service1/ {

    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/realm1/authenticated",
            discovery = "https://<myserver>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
            client_id = "client1",
            client_secret = <........>,
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- authenticate the user against Keycloak (OpenID Connect Authorization Code flow)
        local res, err = require("resty.openidc").authenticate(opts)

        if err then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }

    # I disabled caching so the browser won't cache the site.
    expires           0;
    add_header        Cache-Control private;

    proxy_pass http://realm1-server:port/service1/;
    proxy_set_header Host $http_host;

    proxy_http_version 1.1;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

}
location /service2/ {
 <same for ream2> 
}

There actually seem to be two issues:

  1. We do not check the realm ID (this is a hole).
  2. The sessions for the two realms are cached interchangeably (which leads to the following: if we fix (1), we can only ever be in one realm at a time, and have to log out of realm1 before we can access realm2).

Solution: 1) we need to explicitly check that the realm is the right one, and 2) we should use a separate session table per realm (note that although the second point also seems to fix (1), it does not if an attacker mixes and matches session IDs with his own "special" browser - at least I think so).

There is no documentation for point 2; I had to read the code of openidc.lua and, from there, the code of the library it uses (session.lua).
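
For reference, this is roughly what I pieced together from that reading (a sketch, so treat the details as assumptions rather than documented API): authenticate() accepts a fourth argument, a table of session options that is passed through to lua-resty-session, and its name field becomes the name of the session cookie. Giving each realm its own cookie name is what keeps the sessions separate:

    local openidc = require("resty.openidc")

    -- default: every location block shares the same session cookie (named "session"),
    -- which is why a realm1 session is happily reused for realm2
    -- local res, err = openidc.authenticate(opts)

    -- per-realm session: the 4th argument is handed to resty.session, so the cookie
    -- is named after this realm's client (e.g. "client1") and cannot be mixed up
    local res, err = openidc.authenticate(opts, nil, nil, { name = opts.client_id })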

The changes are as follows:

location /service1/ {

    access_by_lua_block {
        local opts = {
            redirect_uri_path = "/realm1/authenticated",
            discovery = "https://<myserver>/keycloak/auth/realms/realm1/.well-known/openid-configuration",
            client_id = "client1",
            client_secret = <........>,
            session_contents = {id_token=true} -- this is essential for safari!
        }
        -- authenticate the user against Keycloak (OpenID Connect Authorization Code flow)
        local res, err = require("resty.openidc").authenticate(opts,nil,nil,{name=opts.client_id})

        -- reject sessions whose token was issued to a different client (i.e. created in another realm)
        if (err or ( res.id_token.azp ~= opts.client_id ) ) then
            ngx.status = 403
            ngx.say(err)
            ngx.exit(ngx.HTTP_FORBIDDEN)
        end
    }
    <..................no changes here................>
}
location /service2/ {
 <same for ream2> 
}
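
One caveat about the check above: azp is just the client_id the token was issued to, so comparing it with opts.client_id only distinguishes the realms because client1 and client2 are different IDs. If both realms used the same client_id (e.g. "openresty", as in my original config), you would have to compare the issuer instead. A minimal sketch, assuming the usual Keycloak issuer format https://<myserver>/keycloak/auth/realms/<realm> (expected_issuer is just an illustrative local variable):

    -- hypothetical: derive this from the same realm base URL used in opts.discovery
    local expected_issuer = "https://<myserver>/keycloak/auth/realms/realm1"

    local res, err = require("resty.openidc").authenticate(opts, nil, nil, { name = opts.client_id })

    -- the iss claim names the realm that issued the id_token, so a session
    -- obtained from a different realm cannot satisfy this check
    if err or res.id_token.iss ~= expected_issuer then
        ngx.status = 403
        ngx.say(err or "token issued by the wrong realm")
        ngx.exit(ngx.HTTP_FORBIDDEN)
    end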