Releases: lgraubner/sitemap-generator

7.1.0

13 Aug 00:51

Added getPaths and getStats methods. The done event callback now also receives a stats object.

7.0.2

10 Aug 18:46

Fixed XML markup.

7.0.1

09 Aug 09:27

Increased the required Node.js version to >=6.

7.0.0

03 Aug 21:02

Complete rewrite of the sitemap-generator package. Includes significant performance improvements, as the sitemap is now streamed directly to disk.

Breaking

  • The restrictToBasePath option was removed; its behavior is now applied by default
  • The sitemap is written directly to disk instead of being returned as a string
  • Added a filepath option to control where the sitemap is written
  • Cleaned up the events; they are now add, error, ignored, and done
  • Added a getStatus method

6.0.0

26 Feb 15:31
  • Fixed merge request #8
  • Account for Google's limit of 50,000 URLs per sitemap (breaking!)

The URL limit change breaks the v5 behaviour: the done callback no longer returns a single sitemap string but an array. If the limit is exceeded, the array contains more than one sitemap, with a sitemapindex file as the first item.
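The chunking logic behind this return shape can be sketched with a hypothetical helper (buildSitemaps is illustrative only; the markup is abbreviated, not the package's actual output):

```javascript
// Google's sitemap protocol caps each sitemap at 50,000 URLs.
const MAX_URLS = 50000;

// Split an array into consecutive slices of at most `size` items.
function chunk(urls, size) {
  const out = [];
  for (let i = 0; i < urls.length; i += size) out.push(urls.slice(i, i + size));
  return out;
}

// Returns [sitemap] when under the limit, otherwise
// [sitemapindex, sitemap1, sitemap2, ...] as described above.
function buildSitemaps(urls) {
  const sitemaps = chunk(urls, MAX_URLS).map(
    (c) => `<urlset>${c.map((u) => `<url><loc>${u}</loc></url>`).join('')}</urlset>`
  );
  if (sitemaps.length === 1) return sitemaps;
  const index = `<sitemapindex>${sitemaps
    .map((_, i) => `<sitemap><loc>sitemap-${i + 1}.xml</loc></sitemap>`)
    .join('')}</sitemapindex>`;
  return [index, ...sitemaps];
}
```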

5.0.1

15 Jan 17:16

Fixed a crawling bug (#9). Thanks to MisterKatiyar!

5.0.0

04 Nov 09:08
  • Breaking changes
    • The port option is deprecated; specify non-standard ports in the url instead
  • Updated the underlying crawler
  • Fixed problems with robots.txt fetching and ports
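Since the port option is gone, a non-standard port travels inside the start URL itself. Node's built-in WHATWG URL class shows that the port is recoverable from the URL (a sketch, not the package's actual parsing code):

```javascript
// The port is embedded in the start URL instead of a separate option.
const target = new URL('https://example.com:8080/');

console.log(target.port);     // '8080'
console.log(target.hostname); // 'example.com'
```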

v4.1.1

04 Jul 07:39

Fixed the appearance of the port in the initial URL.

v4.1.0

03 Jul 10:29
  • Added support for the noindex robots meta tag
  • Fixed #2 by setting the port when https is used

v4.0.0

10 May 13:28

This is sitemap-generator decoupled from sitemap-generator-cli, hence the versioning starts at 4.0.0.

  • Complete code rewrite
  • Events using Node's EventEmitter
  • Own link parser for the crawler, including support for the base tag, robots.txt, and all kinds of links
  • Enhanced tests