
Commit a2b7c2a

Add autoUpdate and create database at initial run
1 parent a72e212 commit a2b7c2a

10 files changed: +364 −157 lines

README.md

+26 −12
@@ -113,12 +113,14 @@ Note that as far as possible, the same field names as in `geoip-lite` are used,

 ### Setup the configuration

-You can configure the api by 3 way
-`ILA_FIELDS=latitude,longitude` in CLI parameter or
-`ILA_FIELDS=latitude,longitude` as environment variables or
-`await reload({fields: 'latitude,longitude'})`.
+You can configure the api in 3 ways:
+- CLI parameters: `ILA_FIELDS=latitude,longitude`
+- Environment variables: `ILA_FIELDS=latitude,longitude`
+- JavaScript: `await reload({fields: 'latitude,longitude'})`
+
 The names of the CLI parameters and environment variables are the same.

+
 Conf keys in `reload(conf)` are written in lower camel case; the corresponding CLI or ENV parameters are written in snake case with the "ILA_" prefix (short for Ip-Location-Api).

 | `reload(conf)` | CLI or ENV | default | description |
@@ -132,28 +134,40 @@ Conf key in `reload(conf)` is named with "LOWER CAMEL", CLI or ENV parameter is
 | licenseKey | ILA_LICENSE_KEY | redist | By setting a [MaxMind](https://www.maxmind.com/) license key, you can download the latest version of the database from the [MaxMind](https://www.maxmind.com/) server. By setting it to "redist", you can download the database from the [node-geolite2-redist](https://github.com/sapics/node-geolite2-redist) repository, which re-distributes the GeoLite2 database. |
 | ipLocationDb | ILA_IP_LOCATION_DB | | When you need only the "country" field, you can use [ip-location-db](https://github.com/sapics/ip-location-db) data |
 | downloadType | ILA_DOWNLOAD_TYPE | reuse | By setting it to "false", the "tmpDataDir" directory is deleted on every update. "reuse" does not delete "tmpDataDir" and re-uses its database if the database file has not been updated. |
+| autoUpdate | ILA_AUTO_UPDATE | default | By setting it to "false", the database is not updated automatically. "default" updates twice weekly. You can set a cron pattern in the format supported by [cron](https://github.com/kelektiv/node-cron), interpreted in the UTC timezone (for example, ILA_AUTO_UPDATE="0 1 * * *" for a daily update). |
 | multiDbDir | ILA_MULTI_DB_DIR | false | If you use multiple "dataDir" directories, please set this value to "true" |
 | series | ILA_SERIES | GeoLite2 | By setting it to "GeoIP2", you can use the premium "GeoIP2" database |
 | language | ILA_LANGUAGE | en | You can choose "de", "en", "es", "fr", "ja", "pt-BR", "ru", "zh-CN". Changing it changes the language of the "region1_name", "region2_name", and "city" fields |
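
As a sketch of how the options in the table above combine from JavaScript (assuming `reload` is exported by the package as in its configuration examples; the field and language values below are arbitrary examples, and the equivalent `ILA_*` environment variables behave the same way):

```javascript
import { reload } from 'ip-location-api'

// Equivalent to ILA_FIELDS=country,city,latitude,longitude ILA_LANGUAGE=ja ILA_LICENSE_KEY=redist
await reload({
  fields: 'country,city,latitude,longitude', // example field selection
  language: 'ja',                            // one of the languages listed above
  licenseKey: 'redist'                       // download from the redistribution repository
})
```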


 ### Update database

-You can update the database by two way.
-First is `await updateDb()` which is the recommended one, because api's in-memory database is auto reloaded after database update.
-Second is `watchDb()` and CLI command `npm run updatedb`.
-The CLI command update the database and `watchDb` reload api's in-memory database by watching the database directory's change ("dataDir").
+```javascript
+import { updateDb } from 'ip-location-api'
+await updateDb(setting)
+```
+
+or
+
+```bash
+npm run updatedb
+```

 There are three ways to update the database: "ILA_LICENSE_KEY=redist", "ILA_LICENSE_KEY=YOUR_GEOLITE2_LICENSE_KEY", or "ILA_IP_LOCATION_DB=YOUR_CHOSEN_DATABASE".

-When you set "ILA_LICENSE_KEY=redist", you can download GeoLite2 database from redistribution repository [node-geolite2-redist](https://github.com/sapics/node-geolite2-redist).
+When you set "ILA_LICENSE_KEY=redist", it downloads the GeoLite2 database from the redistribution repository [node-geolite2-redist](https://github.com/sapics/node-geolite2-redist).

-YOUR_GEOLITE2_LICENSE_KEY should be replaced by a valid GeoLite2 license key. Please [follow instructions](https://dev.maxmind.com/geoip/geoip2/geolite2/) provided by MaxMind to obtain a license key.
+When you set "ILA_LICENSE_KEY=YOUR_GEOLITE2_LICENSE_KEY", it downloads the GeoLite2 database from the MaxMind server.
+`YOUR_GEOLITE2_LICENSE_KEY` should be replaced by a valid GeoLite2 license key. Please [follow the instructions](https://dev.maxmind.com/geoip/geoip2/geolite2/) provided by MaxMind to obtain a license key.

+When you set "ILA_IP_LOCATION_DB=YOUR_CHOSEN_DATABASE", it downloads the database from [ip-location-db](https://github.com/sapics/ip-location-db) (country type only).
 You can choose "YOUR_CHOSEN_DATABASE" from the country-type databases in [ip-location-db](https://github.com/sapics/ip-location-db). For example, "geolite2-geo-whois-asn" is a country database with wider IP coverage: it matches the GeoLite2 result for IP ranges covered by GeoLite2 and falls back to geo-whois-asn-country for the remaining ranges.
 Another example, "geo-whois-asn", is a [CC0 licensed database](https://github.com/sapics/ip-location-db/tree/main/geo-asn-country) you can use if you are unable to accept the GeoLite2 license.
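
A minimal sketch of selecting one of these country-only databases from JavaScript (assuming the `ipLocationDb` conf key maps to `ILA_IP_LOCATION_DB` exactly as in the table above):

```javascript
import { reload } from 'ip-location-api'

// Equivalent to ILA_IP_LOCATION_DB=geo-whois-asn: country-level data only, under the CC0 license.
await reload({ ipLocationDb: 'geo-whois-asn' })
```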


+Since v2.0, the database is created automatically at initial startup and updated automatically according to `ILA_AUTO_UPDATE`, which updates twice weekly by default.
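
For a custom schedule, a sketch using the cron pattern from the table above (assuming `reload` accepts the `autoUpdate` key just like `ILA_AUTO_UPDATE`):

```javascript
import { reload } from 'ip-location-api'

// Same effect as ILA_AUTO_UPDATE="0 1 * * *": update the database daily at 01:00 UTC.
await reload({
  licenseKey: 'redist',
  autoUpdate: '0 1 * * *'
})
```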
+
 ## How to use with an example

 When you need only geographic coordinates, please set "ILA_FIELDS=latitude,longitude".
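
A minimal sketch of such a coordinates-only setup (the `lookup()` export and the sample IP address do not appear in this excerpt and are assumptions):

```javascript
import { reload, lookup } from 'ip-location-api'

// Load only the two coordinate fields to keep the in-memory database small.
await reload({ fields: 'latitude,longitude' })

// Awaited defensively in case lookup is asynchronous in this configuration.
const location = await lookup('8.8.8.8')  // hypothetical sample address
console.log(location)                     // e.g. { latitude: ..., longitude: ... }
```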
@@ -219,6 +233,7 @@ This library supports Node.js >= 14 for ESM and CJS.
 There are multiple licenses in this library: one for the software library, and the others for the data.
 Please read the LICENSE and EULA files for details.

+
 The license for the software itself is published under the MIT License by [sapics](https://github.com/sapics).


@@ -229,8 +244,7 @@ The GeoLite2 database comes with certain restrictions and obligations, most nota
 - to identify specific households or individuals.

 You can read [the latest version of GeoLite2 EULA](https://www.maxmind.com/en/geolite2/eula).
-GeoLite2 database is provided under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) by [MaxMind](https://www.maxmind.com/), so, you must create attribusion to [MaxMind](https://www.maxmind.com/) for using GeoLite2 database.
+The GeoLite2 database is provided under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) by [MaxMind](https://www.maxmind.com/), so you need to attribute [MaxMind](https://www.maxmind.com/) when using the GeoLite2 database.


 The database of [Countries](https://github.com/annexare/Countries) is published under the MIT license by [Annexare Studio](https://annexare.com/).
-
cjs/db.cjs

+40 −38
@@ -31,7 +31,10 @@ const update = async () => {
   if(refreshTmpDir || !fsSync.existsSync(setting.tmpDataDir)){

     await rimraf(setting.tmpDataDir)
-    await fs.mkdir(setting.tmpDataDir)
+    await fs.mkdir(setting.tmpDataDir, {recursive: true})
+  }
+  if (!fsSync.existsSync(setting.fieldDir)){
+    await fs.mkdir(setting.fieldDir, {recursive: true})
   }

   console.log('Downloading database')
@@ -63,22 +66,18 @@ const update = async () => {
   }
   if(SHA256_RESULT){

-    await fs.writeFile(path.join(setting.dataDir, setting.series + '-' + setting.dataType + '-CSV.zip.sha256'), SHA256_RESULT)
+    await fs.writeFile(path.join(setting.fieldDir, setting.series + '-' + setting.dataType + '-CSV.zip.sha256'), SHA256_RESULT)
   }

-  var tmpFiles = fsSync.readdirSync(setting.dataDir).filter(file => file.endsWith('.tmp'))
+  var tmpFiles = fsSync.readdirSync(setting.fieldDir).filter(file => file.endsWith('.tmp'))
   for(var tmpFile of tmpFiles){
-    await fs.rename(path.join(setting.dataDir, tmpFile), path.join(setting.dataDir, tmpFile.replace('.tmp', '')))
-  }
-  if(setting.smallMemory){
-    try{
-      await rimraf(path.join(setting.dataDir, 'v4'), {recursive: true, force: true})
-      await rimraf(path.join(setting.dataDir, 'v6'), {recursive: true, force: true})
-    }catch(e){
-      console.log(e)
-    }
-    await fs.rename(path.join(setting.dataDir, 'v4-tmp'), path.join(setting.dataDir, 'v4'))
-    await fs.rename(path.join(setting.dataDir, 'v6-tmp'), path.join(setting.dataDir, 'v6'))
+    await fs.rename(path.join(setting.fieldDir, tmpFile), path.join(setting.fieldDir, tmpFile.replace('.tmp', '')))
+  }
+  if(setting.smallMemory && !setting.runningUpdate){
+    await fs.cp(path.join(setting.fieldDir, 'v4-tmp'), path.join(setting.fieldDir, 'v4'), {recursive: true, force: true})
+    await fs.cp(path.join(setting.fieldDir, 'v6-tmp'), path.join(setting.fieldDir, 'v6'), {recursive: true, force: true})
+    rimraf(path.join(setting.fieldDir, 'v4-tmp')).catch(console.warn)
+    rimraf(path.join(setting.fieldDir, 'v6-tmp')).catch(console.warn)
   }

   if(setting.browserType){
@@ -138,9 +137,9 @@ const dbipLocation = async () => {
     v4Buf3.writeInt32LE(v4[i][2], i * 8)
     v4Buf3.writeInt32LE(v4[i][3], i * 8 + 4)
   }
-  fsSync.writeFileSync(path.join(setting.dataDir, '4-1.dat'), v4Buf1)
-  fsSync.writeFileSync(path.join(setting.dataDir, '4-2.dat'), v4Buf2)
-  fsSync.writeFileSync(path.join(setting.dataDir, '4-3.dat'), v4Buf3)
+  fsSync.writeFileSync(path.join(setting.fieldDir, '4-1.dat'), v4Buf1)
+  fsSync.writeFileSync(path.join(setting.fieldDir, '4-2.dat'), v4Buf2)
+  fsSync.writeFileSync(path.join(setting.fieldDir, '4-3.dat'), v4Buf3)

   var v6Buf1 = Buffer.alloc(v6.length * 8)
   var v6Buf2 = Buffer.alloc(v6.length * 8)
@@ -151,9 +150,9 @@ const dbipLocation = async () => {
     v6Buf3.writeInt32LE(v6[i][2], i * 8)
     v6Buf3.writeInt32LE(v6[i][3], i * 8 + 4)
   }
-  fsSync.writeFileSync(path.join(setting.dataDir, '6-1.dat'), v6Buf1)
-  fsSync.writeFileSync(path.join(setting.dataDir, '6-2.dat'), v6Buf2)
-  fsSync.writeFileSync(path.join(setting.dataDir, '6-3.dat'), v6Buf3)
+  fsSync.writeFileSync(path.join(setting.fieldDir, '6-1.dat'), v6Buf1)
+  fsSync.writeFileSync(path.join(setting.fieldDir, '6-2.dat'), v6Buf2)
+  fsSync.writeFileSync(path.join(setting.fieldDir, '6-3.dat'), v6Buf3)
   resolve()
 })
 .on('data', arr => {
@@ -185,21 +184,21 @@ const dbipLocation = async () => {
 }

 const createBrowserIndex = async (type) => {
-  const exportDir = path.join(setting.dataDir, type)
+  const exportDir = path.join(setting.fieldDir, type)
   await fs.rm(path.join(exportDir, '4'), {recursive: true, force: true})
   await fs.mkdir(path.join(exportDir, '4'), {recursive: true})
   await fs.rm(path.join(exportDir, '6'), {recursive: true, force: true})
   await fs.mkdir(path.join(exportDir, '6'), {recursive: true})

   const IndexSize = type === 'country' ? 1024 : 2048

-  var startBuf = await fs.readFile(path.join(setting.dataDir, '4-1.dat'))
+  var startBuf = await fs.readFile(path.join(setting.fieldDir, '4-1.dat'))
   var startList = new Uint32Array(startBuf.buffer)
   var len = startList.length, indexList = new Uint32Array(IndexSize)
   var i, j, k
-  var endBuf = await fs.readFile(path.join(setting.dataDir, '4-2.dat'))
+  var endBuf = await fs.readFile(path.join(setting.fieldDir, '4-2.dat'))
   var endList = new Uint32Array(endBuf.buffer)
-  var dbInfo = await fs.readFile(path.join(setting.dataDir, '4-3.dat'))
+  var dbInfo = await fs.readFile(path.join(setting.fieldDir, '4-3.dat'))
   var dbList = type === 'country' ? new Uint16Array(dbInfo.buffer) : new Int32Array(dbInfo.buffer)
   var recordSize = setting.mainRecordSize + 8
   for(i = 0; i < IndexSize; ++i){
@@ -223,13 +222,13 @@ const createBrowserIndex = async (type) => {
   }
   await fs.writeFile(path.join(exportDir, '4.idx'), Buffer.from(indexList.buffer))

-  startBuf = await fs.readFile(path.join(setting.dataDir, '6-1.dat'))
+  startBuf = await fs.readFile(path.join(setting.fieldDir, '6-1.dat'))
   startList = new BigUint64Array(startBuf.buffer)
   len = startList.length
   indexList = new BigUint64Array(IndexSize)
-  endBuf = await fs.readFile(path.join(setting.dataDir, '6-2.dat'))
+  endBuf = await fs.readFile(path.join(setting.fieldDir, '6-2.dat'))
   endList = new BigUint64Array(endBuf.buffer)
-  dbInfo = await fs.readFile(path.join(setting.dataDir, '6-3.dat'))
+  dbInfo = await fs.readFile(path.join(setting.fieldDir, '6-3.dat'))
   dbList = type === 'country' ? new Uint16Array(dbInfo.buffer) : new Int32Array(dbInfo.buffer)
   recordSize = setting.mainRecordSize + 16
   for(i = 0; i < IndexSize; ++i){
@@ -254,11 +253,14 @@ const createBrowserIndex = async (type) => {
   await fs.writeFile(path.join(exportDir, '6.idx'), Buffer.from(indexList.buffer))

   var exPath = path.join(__dirname, '..', 'browser', type)
-  await fs.rm(exPath, {recursive: true, force: true})
+  await fs.rm(path.join(exPath, '4'), {recursive: true, force: true})
+  await fs.rm(path.join(exPath, '6'), {recursive: true, force: true})
   await fs.cp(exportDir, exPath, {recursive: true})
   exPath = path.join(__dirname, '..', 'browser', type + '-extra')
-  await fs.rm(exPath, {recursive: true, force: true})
+  await fs.rm(path.join(exPath, '4'), {recursive: true, force: true})
+  await fs.rm(path.join(exPath, '6'), {recursive: true, force: true})
   await fs.cp(exportDir, exPath, {recursive: true})
+  await fs.rm(exportDir, {recursive: true, force: true})
 }

 var SHA256_RESULT
@@ -293,7 +295,7 @@ const downloadZip = async () => {
   }
   var sha256 = r[0], data = ''
   try{
-    data = await fs.readFile(path.join(setting.dataDir, database.edition + '.zip.sha256'), 'utf8')
+    data = await fs.readFile(path.join(setting.fieldDir, database.edition + '.zip.sha256'), 'utf8')
   }catch(e){
     data = ''
   }
@@ -394,7 +396,7 @@ const createData = async (src) => {
 const createSmallMemoryFile = (ws, ipv4, line, buffer2, buffer3) => {
   const [ _dir, file, offset ] = getSmallMemoryFile(line, ipv4 ? setting.v4 : setting.v6, true)
   if(offset === 0){
-    const dir = path.join(setting.dataDir, _dir)
+    const dir = path.join(setting.fieldDir, _dir)
     if(ws) ws.end()
     if(file === '_0' && !fsSync.existsSync(dir)){
       fsSync.mkdirSync(dir, {recursive: true})
@@ -417,13 +419,13 @@ const createMainData = async (file, mapDatas) => {
   var ipv4 = file.endsWith('v4.csv')
   var ipv = ipv4 ? 4 : 6
   var rs = fsSync.createReadStream(path.join(setting.tmpDataDir, file))
-  var ws1 = fsSync.createWriteStream(path.join(setting.dataDir, ipv + '-1.dat.tmp'), {highWaterMark: 1024*1024})
+  var ws1 = fsSync.createWriteStream(path.join(setting.fieldDir, ipv + '-1.dat.tmp'), {highWaterMark: 1024*1024})
   if(!setting.smallMemory){
-    var ws2 = fsSync.createWriteStream(path.join(setting.dataDir, ipv + '-2.dat.tmp'), {highWaterMark: 1024*1024})
-    var ws3 = fsSync.createWriteStream(path.join(setting.dataDir, ipv + '-3.dat.tmp'), {highWaterMark: 1024*1024})
+    var ws2 = fsSync.createWriteStream(path.join(setting.fieldDir, ipv + '-2.dat.tmp'), {highWaterMark: 1024*1024})
+    var ws3 = fsSync.createWriteStream(path.join(setting.fieldDir, ipv + '-3.dat.tmp'), {highWaterMark: 1024*1024})
   } else {
     var ws = null
-    var dir = path.join(setting.dataDir, 'v' + ipv + '-tmp')
+    var dir = path.join(setting.fieldDir, 'v' + ipv + '-tmp')
     if(fsSync.existsSync(dir)){
       await fs.rm(dir, {recursive: true, force: true})
     }
@@ -706,8 +708,8 @@ const minifyMapData = (mapDatas) => {
 const createMapData = async (mapDatas) => {
   var locIdList = mapDatas.pop()
   var mapData0 = mapDatas[0]
-  var ws1 = fsSync.createWriteStream(path.join(setting.dataDir, 'location.dat.tmp'))
-  var ws2 = fsSync.createWriteStream(path.join(setting.dataDir, 'name.dat.tmp'))
+  var ws1 = fsSync.createWriteStream(path.join(setting.fieldDir, 'location.dat.tmp'))
+  var ws2 = fsSync.createWriteStream(path.join(setting.fieldDir, 'name.dat.tmp'))
   var cityHash = {}, euHash = {}
   sub1Database = {}, sub2Database = {}, timezoneDatabase = {}
   sub1Count = 0, sub2Count = 0, timezoneCount = 0
@@ -783,7 +785,7 @@ const createMapData = async (mapDatas) => {
   if(!setting.mainFieldHash.area) delete hash.area
   if(!setting.locFieldHash.eu) delete hash.eu
   if(Object.keys(hash).length > 0){
-    await fs.writeFile(path.join(setting.dataDir, 'sub.json.tmp'), JSON.stringify(hash))
+    await fs.writeFile(path.join(setting.fieldDir, 'sub.json.tmp'), JSON.stringify(hash))
   }
   sub1Database = sub2Database = timezoneDatabase = areaDatabase = null
   mapDatas.length = 0
