nanahira / srvpro2 / Commits / 1696a557

Commit 1696a557 authored Feb 16, 2026 by nanahira
Parent: e20a49e9

Commit message: chatgpt

Showing 8 changed files with 427 additions and 5 deletions (+427 −5)
config.example.yaml (+9 −0)
src/config.ts (+16 −0)
src/feats/chatgpt-service.ts (+389 −0)
src/feats/feats-module.ts (+3 −1)
src/feats/index.ts (+1 −0)
src/feats/reconnect/index.ts (+4 −1)
src/feats/reconnect/refresh-field-service.ts (+1 −2)
src/koishi/commands-service.ts (+4 −1)
config.example.yaml

```diff
@@ -51,6 +51,15 @@ windbotBotlist: ./windbot/bots.json
 windbotSpawn: 0
 windbotEndpoint: http://127.0.0.1:2399
 windbotMyIp: 127.0.0.1
+enableChatgpt: 0
+chatgptEndpoint: https://api.openai.com
+chatgptToken: sk-xxxx
+chatgptModel: gpt-4o-mini
+chatgptSystemPrompt: 你是{{windbot}},一名与{{player}}实时互动的游戏对手。玩家当前 locale 是 {{locale}},你必须始终使用 {{language}} 回复(不要混用其他语言)。你的回复应简短、有趣、贴合当前情境,增强玩家沉浸感。避免冗长解释或重复内容,并且每次回复不能超过100个字。
+chatgptTokenLimit: 12000
+chatgptExtraOpts: {}
 enableReconnect: 1
 reconnectTimeout: 180000
 hidePlayerName: 0
```
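The `chatgptSystemPrompt` value above uses `{{...}}` placeholders that the service fills in per player before sending the system message. A minimal standalone sketch of that substitution; the generic `renderPrompt` helper and the sample values here are illustrative, not the project's actual implementation (the real service replaces each known key individually):

```typescript
// Hypothetical generic renderer for {{key}}-style placeholders.
// Unknown keys are left untouched rather than replaced with "undefined".
function renderPrompt(tpl: string, vars: Record<string, string>): string {
  return tpl.replace(/{{\s*(\w+)\s*}}/g, (match, key) => vars[key] ?? match);
}

const template =
  'You are {{windbot}}, playing against {{player}}. Reply in {{language}} ({{locale}}).';

console.log(
  renderPrompt(template, {
    player: 'Alice',
    windbot: 'Zefra',
    locale: 'en-US',
    language: 'English',
  }),
);
// You are Zefra, playing against Alice. Reply in English (en-US).
```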
src/config.ts

```diff
@@ -119,6 +119,22 @@ export const defaultConfig = {
   WINDBOT_ENDPOINT: 'http://127.0.0.1:2399',
   // Public IP/host that windbot uses to connect back to this server.
   WINDBOT_MY_IP: '127.0.0.1',
+  // Enable chatgpt feature for AI-room chat replies.
+  // Boolean parse rule (default false): ''/'0'/'false'/'null' => false, otherwise true.
+  ENABLE_CHATGPT: '0',
+  // Chat completions API endpoint. Format: URL string.
+  CHATGPT_ENDPOINT: 'https://api.openai.com',
+  // Chat completions API token.
+  CHATGPT_TOKEN: 'sk-xxxx',
+  // Chat model.
+  CHATGPT_MODEL: 'gpt-4o-mini',
+  // Optional system prompt template. Supports {{player}} and {{windbot}} placeholders.
+  CHATGPT_SYSTEM_PROMPT: '你是{{windbot}},一名与{{player}}实时互动的游戏对手。玩家当前 locale 是 {{locale}},你必须始终使用 {{language}} 回复(不要混用其他语言)。你的回复应简短、有趣、贴合当前情境,增强玩家沉浸感。避免冗长解释或重复内容,并且每次回复不能超过100个字。',
+  // Token limit used to trim stored conversation context.
+  CHATGPT_TOKEN_LIMIT: '12000',
+  // Extra request options for chat completions. Format: JSON object string.
+  CHATGPT_EXTRA_OPTS: '{}',
   // Enable reconnect feature.
   // Boolean parse rule (default true): only '0'/'false'/'null' => false, otherwise true.
   // Note: with default-true parsing, empty string is treated as true.
```
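The comments above describe two different boolean parse rules: flags like `ENABLE_CHATGPT` default to false (empty string counts as false), while flags like `ENABLE_RECONNECT` default to true (only an explicit `'0'`/`'false'`/`'null'` disables them). A minimal sketch of both rules, assuming a hypothetical `parseBool` helper rather than the project's actual config reader:

```typescript
// Hypothetical helper illustrating the two parse rules from the config comments.
function parseBool(raw: string, defaultValue: boolean): boolean {
  const v = raw.trim().toLowerCase();
  if (defaultValue) {
    // Default-true rule: only explicit '0'/'false'/'null' disable the flag.
    return !['0', 'false', 'null'].includes(v);
  }
  // Default-false rule: the empty string also counts as false.
  return !['', '0', 'false', 'null'].includes(v);
}

console.log(parseBool('', false)); // false — ENABLE_CHATGPT-style default-false
console.log(parseBool('', true));  // true  — ENABLE_RECONNECT-style default-true
console.log(parseBool('1', false)); // true
```

Note the asymmetry: an unset variable keeps each feature at its documented default, which is why the reconnect comment warns that empty string is treated as true.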
src/feats/chatgpt-service.ts (new file, 0 → 100644)

```typescript
import { YGOProCtosChat, NetPlayerType } from 'ygopro-msg-encode';
import { Context } from '../app';
import { Client } from '../client';
import { Room, RoomManager } from '../room';

type ChatgptMessage = {
  role: 'system' | 'user' | 'assistant';
  content: string;
};

type ChatCompletionsResponse = {
  choices?: Array<{
    message?: {
      content?: string;
    };
  }>;
};

type TiktokenEncoder = {
  encode(text: string): { length: number };
};

declare module '../room' {
  interface Room {
    isRequestingChatgpt?: boolean;
    chatgptConversation?: ChatgptMessage[];
  }
}

export class ChatgptService {
  private logger = this.ctx.createLogger(this.constructor.name);
  private roomManager = this.ctx.get(() => RoomManager);
  private enabled = this.ctx.config.getBoolean('ENABLE_CHATGPT');
  private endpoint = this.ctx.config.getString('CHATGPT_ENDPOINT').trim();
  private token = this.ctx.config.getString('CHATGPT_TOKEN').trim();
  private model = this.ctx.config.getString('CHATGPT_MODEL').trim();
  private systemPrompt = this.ctx.config
    .getString('CHATGPT_SYSTEM_PROMPT')
    .trim();
  private tokenLimit = Math.max(
    0,
    this.ctx.config.getInt('CHATGPT_TOKEN_LIMIT') || 0,
  );
  private extraOptions = this.parseExtraOptions(
    this.ctx.config.getString('CHATGPT_EXTRA_OPTS'),
  );
  private tiktokenUnavailable = false;
  private tiktokenUnavailableLogged = false;
  private tokenizerByModel = new Map<string, TiktokenEncoder>();
  private tiktokenModulePromise?: Promise<any>;

  constructor(private ctx: Context) {
    if (!this.enabled) {
      return;
    }
    this.ctx.middleware(YGOProCtosChat, async (msg, client, next) => {
      const room = this.resolveChatRoom(client);
      if (!room) {
        return next();
      }
      const content = (msg.msg || '').trim();
      if (!this.shouldRespond(client, room, content)) {
        return next();
      }
      if (room.isRequestingChatgpt) {
        return next();
      }
      room.isRequestingChatgpt = true;
      void this.requestChatgptAndReply(room, client, content)
        .catch((error) => {
          this.logger.error(
            {
              roomName: room.name,
              clientName: client.name,
              error: (error as Error).toString(),
            },
            'CHATGPT ERROR',
          );
        })
        .finally(() => {
          room.isRequestingChatgpt = false;
        });
      return next();
    });
  }

  async init() {
    if (!this.enabled || this.tiktokenUnavailable || this.tiktokenModulePromise) {
      return;
    }
    const moduleName = 'tiktoken';
    this.tiktokenModulePromise = import(moduleName).catch(() => {
      this.tiktokenUnavailable = true;
      if (!this.tiktokenUnavailableLogged) {
        this.tiktokenUnavailableLogged = true;
        this.logger.warn(
          'tiktoken is unavailable, using approximate token counting',
        );
      }
      return undefined;
    });
  }

  private resolveChatRoom(client: Client) {
    if (!client.roomName) {
      return undefined;
    }
    const room = this.roomManager.findByName(client.roomName);
    if (!room || room.finalizing) {
      return undefined;
    }
    return room;
  }

  private shouldRespond(client: Client, room: Room, content: string) {
    if (!content || content.startsWith('/')) {
      return false;
    }
    if (!this.enabled || !room.windbot) {
      return false;
    }
    if (client.isInternal) {
      return false;
    }
    if (client.pos >= NetPlayerType.OBSERVER) {
      return false;
    }
    return true;
  }

  private async requestChatgptAndReply(
    room: Room,
    client: Client,
    content: string,
  ) {
    const conversation = room.chatgptConversation || [];
    const requestMessages: ChatgptMessage[] = [
      ...conversation,
      {
        role: 'user',
        content,
      },
    ];
    let trimStartIndex = 0;
    if (this.systemPrompt) {
      requestMessages.unshift({
        role: 'system',
        content: this.renderSystemPrompt(client, room),
      });
      trimStartIndex = 1;
    }
    let shrinkCount = 0;
    while (
      !(await this.isWithinTokenLimit(requestMessages, this.tokenLimit)) &&
      requestMessages.length > 1 + trimStartIndex
    ) {
      requestMessages.splice(trimStartIndex, 2);
      shrinkCount += 2;
    }
    const requestBody: Record<string, unknown> = {
      messages: requestMessages,
      model: this.model,
      ...this.extraOptions,
    };
    this.logger.debug(
      {
        roomName: room.name,
        clientName: client.name,
        body: JSON.stringify(requestBody),
      },
      'CHATGPT REQUEST BODY',
    );
    const response = await this.ctx.http.post<ChatCompletionsResponse>(
      this.makeChatCompletionsUrl(),
      requestBody,
      {
        timeout: 300000,
        headers: {
          Authorization: `Bearer ${this.token}`,
        },
      },
    );
    this.logger.debug(
      {
        roomName: room.name,
        clientName: client.name,
        response: JSON.stringify(response.data),
      },
      'CHATGPT RESPONSE BODY',
    );
    const text = response.data?.choices?.[0]?.message?.content?.trim();
    if (!text) {
      return;
    }
    await this.sendReplyToRoom(room, client, text);
    if (shrinkCount > 0) {
      conversation.splice(0, shrinkCount);
    }
    conversation.push({ role: 'user', content });
    conversation.push({ role: 'assistant', content: text });
    room.chatgptConversation = conversation;
  }

  private makeChatCompletionsUrl() {
    const base = this.endpoint.replace(/\/+$/, '');
    return `${base}/v1/chat/completions`;
  }

  private renderSystemPrompt(client: Client, room: Room) {
    const player = client.name || 'Player';
    const windbot = room.windbot?.name || 'AI';
    const locale = client.getLocale() || 'en-US';
    const language = this.resolveLanguageByLocale(locale);
    return this.systemPrompt
      .replace(/{{\s*player\s*}}/g, player)
      .replace(/{{\s*windbot\s*}}/g, windbot)
      .replace(/{{\s*locale\s*}}/g, locale)
      .replace(/{{\s*language\s*}}/g, language);
  }

  private resolveLanguageByLocale(locale: string) {
    const normalized = locale.toLowerCase();
    if (normalized.startsWith('zh')) return 'Simplified Chinese';
    if (normalized.startsWith('en')) return 'English';
    if (normalized.startsWith('ja')) return 'Japanese';
    if (normalized.startsWith('ko')) return 'Korean';
    if (normalized.startsWith('es')) return 'Spanish';
    if (normalized.startsWith('fr')) return 'French';
    if (normalized.startsWith('de')) return 'German';
    if (normalized.startsWith('ru')) return 'Russian';
    if (normalized.startsWith('pt')) return 'Portuguese';
    if (normalized.startsWith('it')) return 'Italian';
    return locale;
  }

  private async sendReplyToRoom(room: Room, client: Client, text: string) {
    const chatType = this.resolveReplyChatType(room, client);
    for (const line of text.split('\n')) {
      if (!line.length) {
        await room.sendChat('', chatType);
        continue;
      }
      for (const chunk of this.chunkLine(line, 100)) {
        await room.sendChat(chunk, chatType);
      }
    }
  }

  private resolveReplyChatType(room: Room, client: Client) {
    const duelPos = room.getIngameDuelPos(client);
    if (duelPos === 0 || duelPos === 1) {
      const opponents = room.getIngameDuelPosPlayers(1 - duelPos);
      const firstOpponent = opponents[0];
      if (firstOpponent) {
        return room.getIngamePos(firstOpponent);
      }
    }
    return room.getIngamePos(client);
  }

  private chunkLine(line: string, size: number) {
    const chars = Array.from(line);
    const chunks: string[] = [];
    for (let i = 0; i < chars.length; i += size) {
      chunks.push(chars.slice(i, i + size).join(''));
    }
    return chunks;
  }

  private parseExtraOptions(raw: string) {
    const source = raw.trim();
    if (!source) {
      return {};
    }
    try {
      const parsed = JSON.parse(source);
      if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
        return {};
      }
      return parsed as Record<string, unknown>;
    } catch (error) {
      this.logger.warn(
        { error: (error as Error).toString() },
        'Invalid CHATGPT_EXTRA_OPTS, fallback to empty object',
      );
      return {};
    }
  }

  private async isWithinTokenLimit(messages: ChatgptMessage[], limit: number) {
    if (!limit) {
      return true;
    }
    const tokenCount = await this.countTokens(messages);
    return tokenCount <= limit;
  }

  private async countTokens(messages: ChatgptMessage[]) {
    const encoder = await this.getTokenizer(this.model);
    if (!encoder) {
      return this.estimateTokens(messages);
    }
    try {
      let tokens = 2;
      for (const message of messages) {
        tokens += 4;
        tokens += encoder.encode(message.role).length;
        tokens += encoder.encode(message.content).length;
      }
      return tokens;
    } catch {
      return this.estimateTokens(messages);
    }
  }

  private estimateTokens(messages: ChatgptMessage[]) {
    let tokens = 2;
    for (const message of messages) {
      tokens += 4;
      tokens += Math.ceil((message.role.length + message.content.length) / 4);
    }
    return tokens;
  }

  private async getTokenizer(model: string) {
    if (this.tiktokenUnavailable) {
      return undefined;
    }
    const cached = this.tokenizerByModel.get(model);
    if (cached) {
      return cached;
    }
    if (!this.tiktokenModulePromise) {
      await this.init();
    }
    try {
      const module = await this.tiktokenModulePromise;
      if (!module) {
        return undefined;
      }
      let encoder: TiktokenEncoder | undefined;
      try {
        encoder = module.encoding_for_model(model);
      } catch {
        encoder = module.get_encoding('cl100k_base');
      }
      if (!encoder) {
        return undefined;
      }
      this.tokenizerByModel.set(model, encoder);
      return encoder;
    } catch {
      this.tiktokenUnavailable = true;
      if (!this.tiktokenUnavailableLogged) {
        this.tiktokenUnavailableLogged = true;
        this.logger.warn(
          'tiktoken is unavailable, using approximate token counting',
        );
      }
      return undefined;
    }
  }
}
```
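When tiktoken cannot be loaded, the service falls back to `estimateTokens`, which charges a flat 2 tokens per request, 4 per message, and roughly one token per 4 characters of role plus content. A standalone copy of that arithmetic, extracted here so it can be run on its own:

```typescript
type Msg = { role: string; content: string };

// Standalone copy of the approximate counter used when tiktoken is missing.
function estimateTokens(messages: Msg[]): number {
  let tokens = 2; // fixed per-request overhead
  for (const m of messages) {
    tokens += 4; // per-message framing overhead
    tokens += Math.ceil((m.role.length + m.content.length) / 4);
  }
  return tokens;
}

const msgs: Msg[] = [
  { role: 'system', content: 'You are a duel opponent.' }, // 6 + 24 = 30 chars
  { role: 'user', content: 'Nice move!' },                 // 4 + 10 = 14 chars
];
console.log(estimateTokens(msgs)); // 2 + (4 + ceil(30/4)) + (4 + ceil(14/4)) = 22
```

This estimate only has to be good enough for the trimming loop in `requestChatgptAndReply`, which drops the two oldest non-system messages at a time until the conversation fits under `CHATGPT_TOKEN_LIMIT`.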
src/feats/feats-module.ts

```diff
@@ -13,16 +13,18 @@ import { MenuManager } from './menu-manager';
 import { ClientKeyProvider } from './client-key-provider';
 import { HidePlayerNameProvider } from './hide-player-name-provider';
 import { CommandsService, KoishiContextService } from '../koishi';
+import { ChatgptService } from './chatgpt-service';
 
 export const FeatsModule = createAppContext<ContextState>()
   .provide(ClientKeyProvider)
   .provide(HidePlayerNameProvider)
   .provide(KoishiContextService)
-  .provide(CommandsService)
+  .provide(CommandsService) // some chat commands
   .provide(MenuManager)
   .provide(ClientVersionCheck)
   .provide(Welcome)
   .provide(PlayerStatusNotify)
+  .provide(ChatgptService) // AI-room chat replies
   .provide(RefreshFieldService)
   .provide(Reconnect)
   .provide(WaitForPlayerProvider) // chat refresh
```
src/feats/index.ts

```diff
 export * from './client-version-check';
 export * from './client-key-provider';
+export * from './chatgpt-service';
 export * from './hide-player-name-provider';
 export * from './menu-manager';
 export * from './welcome';
```
src/feats/reconnect/index.ts

```diff
@@ -509,7 +509,10 @@ export class Reconnect {
         }),
       );
-      await this.refreshFieldService.sendReconnectDuelingMessages(newClient, room);
+      await this.refreshFieldService.sendReconnectDuelingMessages(
+        newClient,
+        room,
+      );
     }
   }
 
   private importClientData(newClient: Client, oldClient: Client, room: Room) {
```
src/feats/reconnect/refresh-field-service.ts

```diff
@@ -36,8 +36,7 @@ export class RefreshFieldService {
     await client.send(await this.requestField(room));
     await this.sendRefreshMessages(client, room);
-    const needResendRequest =
-      this.isReconnectingPlayerOperating(client, room);
+    const needResendRequest = this.isReconnectingPlayerOperating(client, room);
     if (needResendRequest) {
       const lastHint = this.findLastHintForClient(client, room);
```
src/koishi/commands-service.ts

```diff
@@ -34,7 +34,10 @@ export class CommandsService {
       if (!commandContext) {
         return;
       }
-      await this.ctx.dispatch(new YGOProCtosSurrender(), commandContext.client);
+      await this.ctx.dispatch(
+        new YGOProCtosSurrender(),
+        commandContext.client,
+      );
     });
     koishi.command('roomname', '').action(({ session }) => {
```